Test Report: Docker_Linux_crio 22054

83cf6fd59e5d8f3d63346b28bfbd6fd8e1f567be:2025-12-07:42677

Failed tests (29/415)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.41
44 TestAddons/parallel/Registry 14.39
45 TestAddons/parallel/RegistryCreds 0.42
46 TestAddons/parallel/Ingress 148.09
47 TestAddons/parallel/InspektorGadget 5.3
48 TestAddons/parallel/MetricsServer 5.31
50 TestAddons/parallel/CSI 42.57
51 TestAddons/parallel/Headlamp 2.54
52 TestAddons/parallel/CloudSpanner 5.25
53 TestAddons/parallel/LocalPath 10.12
54 TestAddons/parallel/NvidiaDevicePlugin 5.3
55 TestAddons/parallel/Yakd 5.26
56 TestAddons/parallel/AmdGpuDevicePlugin 6.26
280 TestMultiControlPlane/serial/RestartCluster 263.15
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 2.52
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 2.88
294 TestJSONOutput/pause/Command 1.78
300 TestJSONOutput/unpause/Command 1.83
373 TestPause/serial/Pause 7.86
401 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.33
409 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.19
416 TestStartStop/group/old-k8s-version/serial/Pause 6.67
423 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.23
427 TestStartStop/group/no-preload/serial/Pause 6.17
429 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.6
440 TestStartStop/group/newest-cni/serial/Pause 6.37
441 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.4
454 TestStartStop/group/embed-certs/serial/Pause 6.77
461 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.15
TestAddons/serial/Volcano (0.41s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable volcano --alsologtostderr -v=1: exit status 11 (407.54691ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1207 22:56:48.733192  402724 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:56:48.733521  402724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:56:48.733532  402724 out.go:374] Setting ErrFile to fd 2...
	I1207 22:56:48.733537  402724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:56:48.733748  402724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:56:48.734048  402724 mustload.go:66] Loading cluster: addons-746247
	I1207 22:56:48.734384  402724 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:56:48.734406  402724 addons.go:622] checking whether the cluster is paused
	I1207 22:56:48.734497  402724 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:56:48.734519  402724 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:56:48.734874  402724 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:56:48.753805  402724 ssh_runner.go:195] Run: systemctl --version
	I1207 22:56:48.753853  402724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:56:48.771840  402724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:56:48.866559  402724 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:56:48.866649  402724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:56:48.896674  402724 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:56:48.896698  402724 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:56:48.896704  402724 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:56:48.896709  402724 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:56:48.896714  402724 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:56:48.896718  402724 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:56:48.896725  402724 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:56:48.896728  402724 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:56:48.896731  402724 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:56:48.896736  402724 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:56:48.896739  402724 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:56:48.896742  402724 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:56:48.896745  402724 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:56:48.896748  402724 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:56:48.896750  402724 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:56:48.896759  402724 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:56:48.896776  402724 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:56:48.896782  402724 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:56:48.896784  402724 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:56:48.896787  402724 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:56:48.896790  402724 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:56:48.896793  402724 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:56:48.896795  402724 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:56:48.896798  402724 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:56:48.896801  402724 cri.go:89] found id: ""
	I1207 22:56:48.896846  402724 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:56:49.005002  402724 out.go:203] 
	W1207 22:56:49.036474  402724 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:56:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:56:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:56:49.036530  402724 out.go:285] * 
	* 
	W1207 22:56:49.040960  402724 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:56:49.046391  402724 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.41s)
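Every addon-disable failure in this run shows the same signature seen above: the `addons disable` command exits with status 11 (MK_ADDON_DISABLE_PAUSED) because the paused-state check shells into the node and runs `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory" on this crio node. A minimal manual re-check, sketched under the assumption that the addons-746247 profile from this run is still up (only commands that already appear in the log are used):

    # Reproduce the failing paused-state probe directly on the node.
    # Expected, per the stderr above: exit status 1 and
    #   "open /run/runc: no such file or directory"
    out/minikube-linux-amd64 -p addons-746247 ssh "sudo runc list -f json"

    # By contrast, the crictl listing used earlier in the same check succeeds and
    # returns the kube-system container IDs shown in the log:
    out/minikube-linux-amd64 -p addons-746247 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

The same exit-status-11 pattern repeats verbatim in the Registry, RegistryCreds, and the other addon teardown failures below.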

TestAddons/parallel/Registry (14.39s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.320525ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-wsdqp" [56184daa-e3a4-46ca-b017-5a3dd986f623] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00264634s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-d7n5r" [bfdc5400-c591-460d-89bb-87f432c0b904] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00274314s
addons_test.go:392: (dbg) Run:  kubectl --context addons-746247 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-746247 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-746247 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.897703143s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 ip
2025/12/07 22:57:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable registry --alsologtostderr -v=1: exit status 11 (253.588314ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1207 22:57:16.962112  405502 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:16.962353  405502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:16.962361  405502 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:16.962366  405502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:16.962557  405502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:16.962814  405502 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:16.963231  405502 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:16.963265  405502 addons.go:622] checking whether the cluster is paused
	I1207 22:57:16.963409  405502 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:16.963433  405502 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:16.963832  405502 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:16.982682  405502 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:16.982751  405502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:17.001849  405502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:17.095416  405502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:17.095529  405502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:17.126510  405502 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:17.126545  405502 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:17.126550  405502 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:17.126553  405502 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:17.126556  405502 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:17.126560  405502 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:17.126563  405502 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:17.126566  405502 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:17.126568  405502 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:17.126578  405502 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:17.126582  405502 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:17.126585  405502 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:17.126588  405502 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:17.126591  405502 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:17.126594  405502 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:17.126600  405502 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:17.126606  405502 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:17.126610  405502 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:17.126613  405502 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:17.126616  405502 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:17.126624  405502 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:17.126627  405502 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:17.126629  405502 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:17.126632  405502 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:17.126634  405502 cri.go:89] found id: ""
	I1207 22:57:17.126677  405502 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:17.141200  405502 out.go:203] 
	W1207 22:57:17.142471  405502 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:17.142495  405502 out.go:285] * 
	* 
	W1207 22:57:17.146479  405502 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:17.148023  405502 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.39s)

TestAddons/parallel/RegistryCreds (0.42s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.262765ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-746247
addons_test.go:332: (dbg) Run:  kubectl --context addons-746247 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (250.104086ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1207 22:57:09.252561  404427 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:09.252829  404427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:09.252839  404427 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:09.252843  404427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:09.253059  404427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:09.253358  404427 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:09.253703  404427 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:09.253726  404427 addons.go:622] checking whether the cluster is paused
	I1207 22:57:09.253811  404427 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:09.253828  404427 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:09.254194  404427 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:09.272825  404427 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:09.272881  404427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:09.291773  404427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:09.385135  404427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:09.385223  404427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:09.415788  404427 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:09.415822  404427 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:09.415826  404427 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:09.415829  404427 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:09.415832  404427 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:09.415836  404427 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:09.415839  404427 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:09.415842  404427 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:09.415845  404427 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:09.415854  404427 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:09.415857  404427 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:09.415860  404427 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:09.415863  404427 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:09.415866  404427 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:09.415869  404427 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:09.415881  404427 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:09.415889  404427 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:09.415894  404427 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:09.415897  404427 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:09.415899  404427 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:09.415902  404427 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:09.415905  404427 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:09.415908  404427 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:09.415911  404427 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:09.415913  404427 cri.go:89] found id: ""
	I1207 22:57:09.415972  404427 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:09.430768  404427 out.go:203] 
	W1207 22:57:09.431971  404427 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:09.431990  404427 out.go:285] * 
	* 
	W1207 22:57:09.436495  404427 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:09.437900  404427 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.42s)

TestAddons/parallel/Ingress (148.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-746247 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-746247 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-746247 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [956005ec-9e08-4d1b-812d-e92a7e12abd2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [956005ec-9e08-4d1b-812d-e92a7e12abd2] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003662638s
I1207 22:57:18.513513  393125 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.623685178s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
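Unlike the addon-disable failures, this step failed because the in-node curl probe never got a response: the ssh process exited with status 28 (curl's operation-timed-out code) after roughly 2m14s. A hedged manual re-check of the same request, assuming the addons-746247 profile is still running (the first command mirrors the one in the log, with verbose output and an explicit timeout added):

    # Re-run the in-node probe from the failing step; --max-time bounds the hang
    # so a timeout surfaces quickly as exit code 28.
    out/minikube-linux-amd64 -p addons-746247 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # The node IP reported later in this log is 192.168.49.2; the same request
    # can be tried from the host for comparison:
    curl -sv --max-time 30 http://192.168.49.2/ -H 'Host: nginx.example.com'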
addons_test.go:288: (dbg) Run:  kubectl --context addons-746247 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-746247
helpers_test.go:243: (dbg) docker inspect addons-746247:

-- stdout --
	[
	    {
	        "Id": "080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8",
	        "Created": "2025-12-07T22:55:27.983832034Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395583,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:55:28.025616139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8/hostname",
	        "HostsPath": "/var/lib/docker/containers/080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8/hosts",
	        "LogPath": "/var/lib/docker/containers/080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8/080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8-json.log",
	        "Name": "/addons-746247",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-746247:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-746247",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8",
	                "LowerDir": "/var/lib/docker/overlay2/b595ab9cfb55a9daf85c866674f02743973b1601addf08afcae02b59b38cf495-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b595ab9cfb55a9daf85c866674f02743973b1601addf08afcae02b59b38cf495/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b595ab9cfb55a9daf85c866674f02743973b1601addf08afcae02b59b38cf495/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b595ab9cfb55a9daf85c866674f02743973b1601addf08afcae02b59b38cf495/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-746247",
	                "Source": "/var/lib/docker/volumes/addons-746247/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-746247",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-746247",
	                "name.minikube.sigs.k8s.io": "addons-746247",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3d86637c0d851aa624d56a2b281f214498dfaa59d9f2878994009fab6db2049d",
	            "SandboxKey": "/var/run/docker/netns/3d86637c0d85",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-746247": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be54931499a886ac0b17f6a5741a36d0b71dc5f6d5ce5015072847b13448e7f6",
	                    "EndpointID": "be47e88703de538019e033354a322e232626840e947ae3bfe89166e4eb90973f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "be:24:c4:a1:f7:d4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-746247",
	                        "080063613ae7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-746247 -n addons-746247
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-746247 logs -n 25: (1.162202819s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-074233 --alsologtostderr --binary-mirror http://127.0.0.1:45187 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-074233 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	│ delete  │ -p binary-mirror-074233                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-074233 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:55 UTC │
	│ addons  │ enable dashboard -p addons-746247                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-746247                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	│ start   │ -p addons-746247 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:56 UTC │
	│ addons  │ addons-746247 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:56 UTC │                     │
	│ addons  │ addons-746247 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ addons  │ enable headlamp -p addons-746247 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ addons  │ addons-746247 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ addons  │ addons-746247 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ addons  │ addons-746247 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-746247                                                                                                                                                                                                                                                                                                                                                                                           │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │ 07 Dec 25 22:57 UTC │
	│ addons  │ addons-746247 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ addons  │ addons-746247 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ addons  │ addons-746247 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ ip      │ addons-746247 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │ 07 Dec 25 22:57 UTC │
	│ addons  │ addons-746247 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ ssh     │ addons-746247 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ ssh     │ addons-746247 ssh cat /opt/local-path-provisioner/pvc-6cdaae25-a8c6-4a95-9d95-59adcfad1439_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │ 07 Dec 25 22:57 UTC │
	│ addons  │ addons-746247 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ addons  │ addons-746247 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ addons  │ addons-746247 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ addons  │ addons-746247 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ addons  │ addons-746247 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ ip      │ addons-746247 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-746247        │ jenkins │ v1.37.0 │ 07 Dec 25 22:59 UTC │ 07 Dec 25 22:59 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:55:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:55:07.877967  394947 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:55:07.878091  394947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:55:07.878100  394947 out.go:374] Setting ErrFile to fd 2...
	I1207 22:55:07.878105  394947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:55:07.878316  394947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:55:07.878883  394947 out.go:368] Setting JSON to false
	I1207 22:55:07.879777  394947 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5852,"bootTime":1765142256,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:55:07.879834  394947 start.go:143] virtualization: kvm guest
	I1207 22:55:07.881958  394947 out.go:179] * [addons-746247] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:55:07.883277  394947 notify.go:221] Checking for updates...
	I1207 22:55:07.883287  394947 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:55:07.884545  394947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:55:07.885833  394947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 22:55:07.887059  394947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 22:55:07.888222  394947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:55:07.889362  394947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:55:07.890685  394947 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:55:07.917005  394947 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:55:07.917109  394947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:55:07.972282  394947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-07 22:55:07.962463475 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
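	For reference, the two `docker system info --format "{{json .}}"` calls above are how the driver check gathers host facts such as the cgroup driver and available memory. The following is a minimal Go sketch (not minikube's actual implementation) that runs the same command and decodes a handful of the fields visible in the log line above; the struct and field selection are illustrative, and the JSON key names are assumed to match the Docker CLI's info output.

	// docker_info_sketch.go - illustrative only: run `docker system info --format '{{json .}}'`
	// and decode a few of the fields that appear in the log above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// dockerInfo keeps only the fields this sketch cares about; the real output
	// contains many more, as the log line above shows.
	type dockerInfo struct {
		ServerVersion   string `json:"ServerVersion"`
		CgroupDriver    string `json:"CgroupDriver"`
		OperatingSystem string `json:"OperatingSystem"`
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			log.Fatalf("docker system info: %v", err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			log.Fatalf("decode: %v", err)
		}
		fmt.Printf("server=%s cgroup=%s os=%s cpus=%d mem=%d\n",
			info.ServerVersion, info.CgroupDriver, info.OperatingSystem, info.NCPU, info.MemTotal)
	}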
	I1207 22:55:07.972416  394947 docker.go:319] overlay module found
	I1207 22:55:07.974972  394947 out.go:179] * Using the docker driver based on user configuration
	I1207 22:55:07.976048  394947 start.go:309] selected driver: docker
	I1207 22:55:07.976061  394947 start.go:927] validating driver "docker" against <nil>
	I1207 22:55:07.976072  394947 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:55:07.976664  394947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:55:08.036514  394947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-07 22:55:08.026605684 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:55:08.036669  394947 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:55:08.036865  394947 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 22:55:08.038621  394947 out.go:179] * Using Docker driver with root privileges
	I1207 22:55:08.039725  394947 cni.go:84] Creating CNI manager for ""
	I1207 22:55:08.039808  394947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 22:55:08.039824  394947 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 22:55:08.039909  394947 start.go:353] cluster config:
	{Name:addons-746247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-746247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1207 22:55:08.041150  394947 out.go:179] * Starting "addons-746247" primary control-plane node in "addons-746247" cluster
	I1207 22:55:08.042067  394947 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 22:55:08.043164  394947 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 22:55:08.044140  394947 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 22:55:08.044167  394947 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 22:55:08.044186  394947 cache.go:65] Caching tarball of preloaded images
	I1207 22:55:08.044258  394947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 22:55:08.044333  394947 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 22:55:08.044348  394947 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 22:55:08.044742  394947 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/config.json ...
	I1207 22:55:08.044771  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/config.json: {Name:mk1ec2873a49cec8dde6b1769bdcaef76c909bf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:08.061282  394947 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1207 22:55:08.061444  394947 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1207 22:55:08.061484  394947 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1207 22:55:08.061494  394947 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1207 22:55:08.061506  394947 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1207 22:55:08.061517  394947 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1207 22:55:21.204767  394947 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1207 22:55:21.204812  394947 cache.go:243] Successfully downloaded all kic artifacts
	I1207 22:55:21.204861  394947 start.go:360] acquireMachinesLock for addons-746247: {Name:mkdac485f32371369587267e2a039908da41c790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 22:55:21.204982  394947 start.go:364] duration metric: took 98.729µs to acquireMachinesLock for "addons-746247"
	I1207 22:55:21.205007  394947 start.go:93] Provisioning new machine with config: &{Name:addons-746247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-746247 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 22:55:21.205089  394947 start.go:125] createHost starting for "" (driver="docker")
	I1207 22:55:21.207243  394947 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1207 22:55:21.207530  394947 start.go:159] libmachine.API.Create for "addons-746247" (driver="docker")
	I1207 22:55:21.207582  394947 client.go:173] LocalClient.Create starting
	I1207 22:55:21.207702  394947 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem
	I1207 22:55:21.264446  394947 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem
	I1207 22:55:21.455469  394947 cli_runner.go:164] Run: docker network inspect addons-746247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 22:55:21.473006  394947 cli_runner.go:211] docker network inspect addons-746247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 22:55:21.473073  394947 network_create.go:284] running [docker network inspect addons-746247] to gather additional debugging logs...
	I1207 22:55:21.473095  394947 cli_runner.go:164] Run: docker network inspect addons-746247
	W1207 22:55:21.489353  394947 cli_runner.go:211] docker network inspect addons-746247 returned with exit code 1
	I1207 22:55:21.489405  394947 network_create.go:287] error running [docker network inspect addons-746247]: docker network inspect addons-746247: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-746247 not found
	I1207 22:55:21.489424  394947 network_create.go:289] output of [docker network inspect addons-746247]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-746247 not found
	
	** /stderr **
	I1207 22:55:21.489597  394947 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 22:55:21.506721  394947 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c10a70}
	I1207 22:55:21.506775  394947 network_create.go:124] attempt to create docker network addons-746247 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1207 22:55:21.506842  394947 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-746247 addons-746247
	I1207 22:55:21.554823  394947 network_create.go:108] docker network addons-746247 192.168.49.0/24 created
	I1207 22:55:21.554857  394947 kic.go:121] calculated static IP "192.168.49.2" for the "addons-746247" container
	I1207 22:55:21.554924  394947 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 22:55:21.571502  394947 cli_runner.go:164] Run: docker volume create addons-746247 --label name.minikube.sigs.k8s.io=addons-746247 --label created_by.minikube.sigs.k8s.io=true
	I1207 22:55:21.590910  394947 oci.go:103] Successfully created a docker volume addons-746247
	I1207 22:55:21.590989  394947 cli_runner.go:164] Run: docker run --rm --name addons-746247-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-746247 --entrypoint /usr/bin/test -v addons-746247:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 22:55:24.094955  394947 cli_runner.go:217] Completed: docker run --rm --name addons-746247-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-746247 --entrypoint /usr/bin/test -v addons-746247:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (2.503924226s)
	I1207 22:55:24.094987  394947 oci.go:107] Successfully prepared a docker volume addons-746247
	I1207 22:55:24.095054  394947 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 22:55:24.095068  394947 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 22:55:24.095124  394947 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-746247:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1207 22:55:27.912478  394947 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-746247:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.817292018s)
	I1207 22:55:27.912512  394947 kic.go:203] duration metric: took 3.817440652s to extract preloaded images to volume ...
	W1207 22:55:27.912595  394947 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 22:55:27.912624  394947 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 22:55:27.912666  394947 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 22:55:27.966788  394947 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-746247 --name addons-746247 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-746247 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-746247 --network addons-746247 --ip 192.168.49.2 --volume addons-746247:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 22:55:28.243798  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Running}}
	I1207 22:55:28.263872  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:28.281972  394947 cli_runner.go:164] Run: docker exec addons-746247 stat /var/lib/dpkg/alternatives/iptables
	I1207 22:55:28.326214  394947 oci.go:144] the created container "addons-746247" has a running status.
	I1207 22:55:28.326244  394947 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa...
	I1207 22:55:28.338113  394947 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 22:55:28.362801  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:28.384954  394947 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 22:55:28.384977  394947 kic_runner.go:114] Args: [docker exec --privileged addons-746247 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 22:55:28.425294  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:28.447721  394947 machine.go:94] provisionDockerMachine start ...
	I1207 22:55:28.447834  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:28.470101  394947 main.go:143] libmachine: Using SSH client type: native
	I1207 22:55:28.470470  394947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1207 22:55:28.470491  394947 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 22:55:28.471237  394947 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33934->127.0.0.1:33148: read: connection reset by peer
	I1207 22:55:31.602628  394947 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-746247
	
	I1207 22:55:31.602662  394947 ubuntu.go:182] provisioning hostname "addons-746247"
	I1207 22:55:31.602747  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:31.621851  394947 main.go:143] libmachine: Using SSH client type: native
	I1207 22:55:31.622077  394947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1207 22:55:31.622092  394947 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-746247 && echo "addons-746247" | sudo tee /etc/hostname
	I1207 22:55:31.760414  394947 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-746247
	
	I1207 22:55:31.760536  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:31.779073  394947 main.go:143] libmachine: Using SSH client type: native
	I1207 22:55:31.779315  394947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1207 22:55:31.779352  394947 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-746247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-746247/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-746247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 22:55:31.909356  394947 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 22:55:31.909390  394947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 22:55:31.909443  394947 ubuntu.go:190] setting up certificates
	I1207 22:55:31.909467  394947 provision.go:84] configureAuth start
	I1207 22:55:31.909549  394947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-746247
	I1207 22:55:31.927898  394947 provision.go:143] copyHostCerts
	I1207 22:55:31.927982  394947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 22:55:31.928114  394947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 22:55:31.928187  394947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 22:55:31.928254  394947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.addons-746247 san=[127.0.0.1 192.168.49.2 addons-746247 localhost minikube]
	I1207 22:55:32.029545  394947 provision.go:177] copyRemoteCerts
	I1207 22:55:32.029611  394947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 22:55:32.029648  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:32.048378  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
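	The repeated `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` calls above resolve the randomly published SSH port for the node container (33148 in this run), which the ssh client on the previous line then dials on 127.0.0.1. A small Go sketch of the same lookup, decoding the inspect JSON instead of using a template; it is illustrative, not minikube's code, and the container name is taken from this run.

	// ssh_port_sketch.go - illustrative only: find the host port Docker published
	// for a container's 22/tcp, the same value the Go template above yields.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var results []inspect // `docker inspect` returns a JSON array
		if err := json.Unmarshal(out, &results); err != nil {
			return "", err
		}
		if len(results) == 0 {
			return "", fmt.Errorf("no such container: %s", container)
		}
		bindings := results[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no 22/tcp binding for %s", container)
		}
		return bindings[0].HostPort, nil
	}

	func main() {
		port, err := sshHostPort("addons-746247")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh host port:", port) // e.g. 33148 in the run above
	}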
	I1207 22:55:32.143012  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 22:55:32.163547  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 22:55:32.182016  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 22:55:32.199811  394947 provision.go:87] duration metric: took 290.321463ms to configureAuth
	I1207 22:55:32.199845  394947 ubuntu.go:206] setting minikube options for container-runtime
	I1207 22:55:32.200051  394947 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:55:32.200165  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:32.218883  394947 main.go:143] libmachine: Using SSH client type: native
	I1207 22:55:32.219141  394947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1207 22:55:32.219158  394947 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 22:55:32.494750  394947 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 22:55:32.494780  394947 machine.go:97] duration metric: took 4.04701985s to provisionDockerMachine
	I1207 22:55:32.494794  394947 client.go:176] duration metric: took 11.287202498s to LocalClient.Create
	I1207 22:55:32.494808  394947 start.go:167] duration metric: took 11.287280187s to libmachine.API.Create "addons-746247"
	I1207 22:55:32.494817  394947 start.go:293] postStartSetup for "addons-746247" (driver="docker")
	I1207 22:55:32.494829  394947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 22:55:32.494891  394947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 22:55:32.494941  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:32.512633  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:32.608355  394947 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 22:55:32.612206  394947 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 22:55:32.612233  394947 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 22:55:32.612246  394947 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 22:55:32.612311  394947 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 22:55:32.612365  394947 start.go:296] duration metric: took 117.540414ms for postStartSetup
	I1207 22:55:32.612685  394947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-746247
	I1207 22:55:32.630216  394947 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/config.json ...
	I1207 22:55:32.630539  394947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 22:55:32.630583  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:32.648357  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:32.740828  394947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 22:55:32.745718  394947 start.go:128] duration metric: took 11.540610455s to createHost
	I1207 22:55:32.745758  394947 start.go:83] releasing machines lock for "addons-746247", held for 11.540764054s
	I1207 22:55:32.745835  394947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-746247
	I1207 22:55:32.763862  394947 ssh_runner.go:195] Run: cat /version.json
	I1207 22:55:32.763910  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:32.763976  394947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 22:55:32.764065  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:32.782121  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:32.783160  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:32.927228  394947 ssh_runner.go:195] Run: systemctl --version
	I1207 22:55:32.934304  394947 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 22:55:32.970418  394947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 22:55:32.975421  394947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 22:55:32.975501  394947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 22:55:33.002259  394947 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 22:55:33.002284  394947 start.go:496] detecting cgroup driver to use...
	I1207 22:55:33.002315  394947 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 22:55:33.002398  394947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 22:55:33.019553  394947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 22:55:33.032649  394947 docker.go:218] disabling cri-docker service (if available) ...
	I1207 22:55:33.032723  394947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 22:55:33.049663  394947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 22:55:33.067499  394947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 22:55:33.151706  394947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 22:55:33.238552  394947 docker.go:234] disabling docker service ...
	I1207 22:55:33.238620  394947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 22:55:33.258358  394947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 22:55:33.271151  394947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 22:55:33.356271  394947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 22:55:33.439672  394947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 22:55:33.452840  394947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 22:55:33.467089  394947 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 22:55:33.467152  394947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.477450  394947 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 22:55:33.477522  394947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.486169  394947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.495142  394947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.504242  394947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 22:55:33.512505  394947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.521180  394947 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.534828  394947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.543772  394947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 22:55:33.550871  394947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 22:55:33.558319  394947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:55:33.639086  394947 ssh_runner.go:195] Run: sudo systemctl restart crio
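	The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image and the systemd cgroup manager before the daemon is reloaded and restarted. A hedged Go sketch of two of those substitutions, applied to an in-memory copy of a drop-in (the sample contents are invented for illustration; the real flow edits the file over SSH):

	// crio_conf_sketch.go - illustrative equivalent of two of the sed edits above:
	// force pause_image and cgroup_manager in a crio drop-in.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	`
		// Same intent as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

		// Same intent as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)

		fmt.Print(conf)
	}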
	I1207 22:55:33.776444  394947 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 22:55:33.776531  394947 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 22:55:33.780595  394947 start.go:564] Will wait 60s for crictl version
	I1207 22:55:33.780645  394947 ssh_runner.go:195] Run: which crictl
	I1207 22:55:33.784139  394947 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 22:55:33.810930  394947 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 22:55:33.811035  394947 ssh_runner.go:195] Run: crio --version
	I1207 22:55:33.839409  394947 ssh_runner.go:195] Run: crio --version
	I1207 22:55:33.869699  394947 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 22:55:33.870860  394947 cli_runner.go:164] Run: docker network inspect addons-746247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 22:55:33.888236  394947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 22:55:33.892570  394947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
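	The bash one-liner above removes any existing line for host.minikube.internal from /etc/hosts and appends the network gateway (192.168.49.1) as its address; the same pattern reappears later for control-plane.minikube.internal. A minimal Go sketch of that upsert, operating on a string rather than the real file (helper name and sample contents are illustrative):

	// hosts_sketch.go - illustrative version of the one-liner above: drop any line
	// already ending in the host name and append the new mapping.
	package main

	import (
		"fmt"
		"strings"
	)

	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // same filter as: grep -v $'\thost.minikube.internal$'
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n"
		fmt.Print(upsertHost(hosts, "192.168.49.1", "host.minikube.internal"))
	}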
	I1207 22:55:33.903004  394947 kubeadm.go:884] updating cluster {Name:addons-746247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-746247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 22:55:33.903142  394947 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 22:55:33.903192  394947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 22:55:33.935593  394947 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 22:55:33.935615  394947 crio.go:433] Images already preloaded, skipping extraction
	I1207 22:55:33.935661  394947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 22:55:33.961754  394947 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 22:55:33.961777  394947 cache_images.go:86] Images are preloaded, skipping loading
	I1207 22:55:33.961785  394947 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1207 22:55:33.961878  394947 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-746247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-746247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 22:55:33.961940  394947 ssh_runner.go:195] Run: crio config
	I1207 22:55:34.006804  394947 cni.go:84] Creating CNI manager for ""
	I1207 22:55:34.006829  394947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 22:55:34.006847  394947 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 22:55:34.006869  394947 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-746247 NodeName:addons-746247 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 22:55:34.006985  394947 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-746247"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
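	The generated config above includes a KubeletConfiguration document that pins the kubelet to the systemd cgroup driver and the CRI-O socket, matching the settings written into the crio drop-in earlier. A small sketch that parses just that document and checks those two fields; it assumes gopkg.in/yaml.v3 is available and is illustrative only.

	// kubelet_cfg_sketch.go - illustrative check (assumes gopkg.in/yaml.v3 on the
	// module path): parse the KubeletConfiguration generated above and confirm the
	// cgroup driver and CRI endpoint match what crio was configured with.
	package main

	import (
		"fmt"
		"log"

		"gopkg.in/yaml.v3"
	)

	type kubeletConfig struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}

	func main() {
		doc := []byte(`apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	`)
		var cfg kubeletConfig
		if err := yaml.Unmarshal(doc, &cfg); err != nil {
			log.Fatal(err)
		}
		if cfg.CgroupDriver != "systemd" || cfg.ContainerRuntimeEndpoint != "unix:///var/run/crio/crio.sock" {
			log.Fatalf("unexpected kubelet config: %+v", cfg)
		}
		fmt.Printf("%s: cgroupDriver=%s endpoint=%s\n", cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint)
	}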
	
	I1207 22:55:34.007053  394947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 22:55:34.015762  394947 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 22:55:34.015826  394947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 22:55:34.024072  394947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1207 22:55:34.037148  394947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 22:55:34.053353  394947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1207 22:55:34.066929  394947 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1207 22:55:34.070768  394947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 22:55:34.080977  394947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:55:34.161866  394947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 22:55:34.188017  394947 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247 for IP: 192.168.49.2
	I1207 22:55:34.188043  394947 certs.go:195] generating shared ca certs ...
	I1207 22:55:34.188063  394947 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.188229  394947 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 22:55:34.249472  394947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt ...
	I1207 22:55:34.249503  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt: {Name:mkd69947a3567aa7d942ff19b503205a04e259b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.249687  394947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key ...
	I1207 22:55:34.249700  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key: {Name:mk2e9ee7c00196d91bb45d703a62468cec7da9a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.249785  394947 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 22:55:34.311480  394947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt ...
	I1207 22:55:34.311514  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt: {Name:mke7df825abb9dd8867e3bf7c96a7f60cd0e4178 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.311690  394947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key ...
	I1207 22:55:34.311703  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key: {Name:mk1346b56082063bd94f4694763c569b1bb6e322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
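	The certs.go lines above generate the shared "minikubeCA" and "proxyClientCA" key pairs that later sign the profile and apiserver certificates. For context, a comparable self-signed CA can be produced with the Go standard library alone; the sketch below is illustrative (subject name and validity are placeholders) and is not minikube's actual crypto.go.

	// ca_sketch.go - illustrative self-signed CA generation with the standard
	// library, comparable in spirit to the minikubeCA generation logged above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Self-signed: template and parent are the same certificate.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}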
	I1207 22:55:34.311775  394947 certs.go:257] generating profile certs ...
	I1207 22:55:34.311830  394947 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.key
	I1207 22:55:34.311844  394947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt with IP's: []
	I1207 22:55:34.459886  394947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt ...
	I1207 22:55:34.459919  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: {Name:mkda54fd8d145dcd877ec8773e9ab29431d85549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.460096  394947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.key ...
	I1207 22:55:34.460107  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.key: {Name:mk5d90aa2133412a9a7228d919ee55c2bf5e8d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.460174  394947 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.key.7aa9af9f
	I1207 22:55:34.460194  394947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.crt.7aa9af9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1207 22:55:34.512853  394947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.crt.7aa9af9f ...
	I1207 22:55:34.512882  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.crt.7aa9af9f: {Name:mk7435cc211dd19633fb876b7aac8cc207f2fb1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.513042  394947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.key.7aa9af9f ...
	I1207 22:55:34.513055  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.key.7aa9af9f: {Name:mk7642198a25c6ebb0765ede998b554bfc92b3d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.513127  394947 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.crt.7aa9af9f -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.crt
	I1207 22:55:34.513197  394947 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.key.7aa9af9f -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.key
	I1207 22:55:34.513246  394947 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.key
	I1207 22:55:34.513264  394947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.crt with IP's: []
	I1207 22:55:34.539020  394947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.crt ...
	I1207 22:55:34.539051  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.crt: {Name:mk880bdd2296a66ea10ef4a4a54c6b9c4d0d737d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.539197  394947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.key ...
	I1207 22:55:34.539211  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.key: {Name:mk380aecc8ca7a8b0bbbb2d69c01405c028eeba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.539395  394947 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 22:55:34.539440  394947 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 22:55:34.539465  394947 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 22:55:34.539495  394947 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 22:55:34.540084  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 22:55:34.559178  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 22:55:34.577378  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 22:55:34.595015  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 22:55:34.612564  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1207 22:55:34.629867  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 22:55:34.647472  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 22:55:34.665693  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 22:55:34.683157  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 22:55:34.702841  394947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 22:55:34.715410  394947 ssh_runner.go:195] Run: openssl version
	I1207 22:55:34.721638  394947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:55:34.729187  394947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 22:55:34.739401  394947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:55:34.743371  394947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:55:34.743427  394947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:55:34.777970  394947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 22:55:34.786068  394947 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 22:55:34.793700  394947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 22:55:34.797444  394947 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 22:55:34.797494  394947 kubeadm.go:401] StartCluster: {Name:addons-746247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-746247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:55:34.797569  394947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:55:34.797614  394947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:55:34.824698  394947 cri.go:89] found id: ""
	I1207 22:55:34.824774  394947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 22:55:34.832990  394947 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 22:55:34.841056  394947 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 22:55:34.841119  394947 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 22:55:34.849053  394947 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 22:55:34.849072  394947 kubeadm.go:158] found existing configuration files:
	
	I1207 22:55:34.849111  394947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 22:55:34.857140  394947 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 22:55:34.857210  394947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 22:55:34.864640  394947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 22:55:34.872159  394947 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 22:55:34.872209  394947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 22:55:34.879374  394947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 22:55:34.886894  394947 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 22:55:34.886944  394947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 22:55:34.894631  394947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 22:55:34.902360  394947 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 22:55:34.902426  394947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 22:55:34.909754  394947 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 22:55:34.947620  394947 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 22:55:34.947701  394947 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 22:55:34.981278  394947 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 22:55:34.981402  394947 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 22:55:34.981475  394947 kubeadm.go:319] OS: Linux
	I1207 22:55:34.981551  394947 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 22:55:34.981631  394947 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 22:55:34.981706  394947 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 22:55:34.981797  394947 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 22:55:34.981882  394947 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 22:55:34.981946  394947 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 22:55:34.982024  394947 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 22:55:34.982087  394947 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 22:55:35.041416  394947 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 22:55:35.041571  394947 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 22:55:35.041727  394947 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 22:55:35.049517  394947 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 22:55:35.052510  394947 out.go:252]   - Generating certificates and keys ...
	I1207 22:55:35.052629  394947 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 22:55:35.052727  394947 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 22:55:35.336449  394947 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 22:55:35.371707  394947 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 22:55:35.842888  394947 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 22:55:35.919018  394947 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 22:55:36.037801  394947 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 22:55:36.037963  394947 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-746247 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 22:55:36.144113  394947 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 22:55:36.144260  394947 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-746247 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 22:55:36.322581  394947 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 22:55:36.793949  394947 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 22:55:37.264164  394947 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 22:55:37.264302  394947 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 22:55:37.454215  394947 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 22:55:37.575913  394947 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 22:55:37.708250  394947 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 22:55:37.884302  394947 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 22:55:37.943668  394947 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 22:55:37.944192  394947 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 22:55:37.948013  394947 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 22:55:37.949649  394947 out.go:252]   - Booting up control plane ...
	I1207 22:55:37.949759  394947 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 22:55:37.949848  394947 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 22:55:37.950599  394947 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 22:55:37.978499  394947 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 22:55:37.978640  394947 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 22:55:37.985244  394947 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 22:55:37.986238  394947 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 22:55:37.986321  394947 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 22:55:38.084863  394947 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 22:55:38.084993  394947 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 22:55:39.086596  394947 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00186562s
	I1207 22:55:39.090916  394947 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 22:55:39.091045  394947 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1207 22:55:39.091217  394947 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 22:55:39.091384  394947 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 22:55:40.967963  394947 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.877067951s
	I1207 22:55:41.279216  394947 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.188278544s
	I1207 22:55:42.592959  394947 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502005s
	I1207 22:55:42.608410  394947 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 22:55:42.620715  394947 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 22:55:42.630255  394947 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 22:55:42.630534  394947 kubeadm.go:319] [mark-control-plane] Marking the node addons-746247 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 22:55:42.637975  394947 kubeadm.go:319] [bootstrap-token] Using token: y88hmj.0itrb6u5xpqhln4u
	I1207 22:55:42.639462  394947 out.go:252]   - Configuring RBAC rules ...
	I1207 22:55:42.639606  394947 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 22:55:42.642464  394947 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 22:55:42.647610  394947 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 22:55:42.650838  394947 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 22:55:42.653312  394947 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 22:55:42.655666  394947 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 22:55:42.999761  394947 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 22:55:43.416415  394947 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 22:55:43.998632  394947 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 22:55:43.999491  394947 kubeadm.go:319] 
	I1207 22:55:43.999621  394947 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 22:55:43.999638  394947 kubeadm.go:319] 
	I1207 22:55:43.999728  394947 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 22:55:43.999738  394947 kubeadm.go:319] 
	I1207 22:55:43.999760  394947 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 22:55:43.999842  394947 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 22:55:43.999925  394947 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 22:55:43.999935  394947 kubeadm.go:319] 
	I1207 22:55:44.000031  394947 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 22:55:44.000043  394947 kubeadm.go:319] 
	I1207 22:55:44.000113  394947 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 22:55:44.000122  394947 kubeadm.go:319] 
	I1207 22:55:44.000198  394947 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 22:55:44.000264  394947 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 22:55:44.000375  394947 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 22:55:44.000392  394947 kubeadm.go:319] 
	I1207 22:55:44.000519  394947 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 22:55:44.000637  394947 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 22:55:44.000649  394947 kubeadm.go:319] 
	I1207 22:55:44.000782  394947 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token y88hmj.0itrb6u5xpqhln4u \
	I1207 22:55:44.000931  394947 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 \
	I1207 22:55:44.000952  394947 kubeadm.go:319] 	--control-plane 
	I1207 22:55:44.000956  394947 kubeadm.go:319] 
	I1207 22:55:44.001084  394947 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 22:55:44.001093  394947 kubeadm.go:319] 
	I1207 22:55:44.001160  394947 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token y88hmj.0itrb6u5xpqhln4u \
	I1207 22:55:44.001284  394947 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 
	I1207 22:55:44.002766  394947 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 22:55:44.002927  394947 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 22:55:44.002960  394947 cni.go:84] Creating CNI manager for ""
	I1207 22:55:44.002974  394947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 22:55:44.004664  394947 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1207 22:55:44.005686  394947 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 22:55:44.010044  394947 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1207 22:55:44.010071  394947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1207 22:55:44.023840  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 22:55:44.235804  394947 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 22:55:44.235903  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:44.235938  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-746247 minikube.k8s.io/updated_at=2025_12_07T22_55_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=addons-746247 minikube.k8s.io/primary=true
	I1207 22:55:44.247539  394947 ops.go:34] apiserver oom_adj: -16
	I1207 22:55:44.311719  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:44.812580  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:45.312423  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:45.811825  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:46.312815  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:46.811918  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:47.312623  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:47.812761  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:48.311811  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:48.378590  394947 kubeadm.go:1114] duration metric: took 4.142753066s to wait for elevateKubeSystemPrivileges
	I1207 22:55:48.378625  394947 kubeadm.go:403] duration metric: took 13.581135159s to StartCluster
	I1207 22:55:48.378643  394947 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:48.378770  394947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 22:55:48.379198  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:48.379473  394947 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 22:55:48.379496  394947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 22:55:48.379537  394947 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1207 22:55:48.379654  394947 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:55:48.379673  394947 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-746247"
	I1207 22:55:48.379682  394947 addons.go:70] Setting yakd=true in profile "addons-746247"
	I1207 22:55:48.379701  394947 addons.go:70] Setting storage-provisioner=true in profile "addons-746247"
	I1207 22:55:48.379714  394947 addons.go:70] Setting default-storageclass=true in profile "addons-746247"
	I1207 22:55:48.379715  394947 addons.go:239] Setting addon yakd=true in "addons-746247"
	I1207 22:55:48.379722  394947 addons.go:239] Setting addon storage-provisioner=true in "addons-746247"
	I1207 22:55:48.379730  394947 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-746247"
	I1207 22:55:48.379723  394947 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-746247"
	I1207 22:55:48.379746  394947 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-746247"
	I1207 22:55:48.379760  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.379762  394947 addons.go:70] Setting ingress-dns=true in profile "addons-746247"
	I1207 22:55:48.379768  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.379769  394947 addons.go:70] Setting gcp-auth=true in profile "addons-746247"
	I1207 22:55:48.379774  394947 addons.go:239] Setting addon ingress-dns=true in "addons-746247"
	I1207 22:55:48.379783  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.379790  394947 mustload.go:66] Loading cluster: addons-746247
	I1207 22:55:48.379854  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.379752  394947 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-746247"
	I1207 22:55:48.379899  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.379962  394947 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:55:48.380129  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.380188  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.380253  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.380272  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.380286  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.380292  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.380511  394947 addons.go:70] Setting volcano=true in profile "addons-746247"
	I1207 22:55:48.380537  394947 addons.go:239] Setting addon volcano=true in "addons-746247"
	I1207 22:55:48.380570  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.380615  394947 addons.go:70] Setting volumesnapshots=true in profile "addons-746247"
	I1207 22:55:48.380640  394947 addons.go:239] Setting addon volumesnapshots=true in "addons-746247"
	I1207 22:55:48.380685  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.381048  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.381146  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.381279  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.381390  394947 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-746247"
	I1207 22:55:48.381416  394947 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-746247"
	I1207 22:55:48.381441  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.381697  394947 addons.go:70] Setting registry=true in profile "addons-746247"
	I1207 22:55:48.381723  394947 addons.go:239] Setting addon registry=true in "addons-746247"
	I1207 22:55:48.381751  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.382140  394947 addons.go:70] Setting cloud-spanner=true in profile "addons-746247"
	I1207 22:55:48.379742  394947 addons.go:70] Setting ingress=true in profile "addons-746247"
	I1207 22:55:48.382172  394947 addons.go:239] Setting addon cloud-spanner=true in "addons-746247"
	I1207 22:55:48.382179  394947 addons.go:239] Setting addon ingress=true in "addons-746247"
	I1207 22:55:48.382196  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.382207  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.382256  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.382275  394947 addons.go:70] Setting registry-creds=true in profile "addons-746247"
	I1207 22:55:48.382293  394947 addons.go:239] Setting addon registry-creds=true in "addons-746247"
	I1207 22:55:48.382322  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.382603  394947 out.go:179] * Verifying Kubernetes components...
	I1207 22:55:48.382776  394947 addons.go:70] Setting inspektor-gadget=true in profile "addons-746247"
	I1207 22:55:48.382797  394947 addons.go:239] Setting addon inspektor-gadget=true in "addons-746247"
	I1207 22:55:48.382822  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.383622  394947 addons.go:70] Setting metrics-server=true in profile "addons-746247"
	I1207 22:55:48.383675  394947 addons.go:239] Setting addon metrics-server=true in "addons-746247"
	I1207 22:55:48.383709  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.384578  394947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:55:48.384580  394947 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-746247"
	I1207 22:55:48.385359  394947 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-746247"
	I1207 22:55:48.394718  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.395362  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.395778  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.396409  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.396699  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.397604  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.399857  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.417494  394947 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1207 22:55:48.418860  394947 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 22:55:48.418957  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1207 22:55:48.419115  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.434049  394947 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1207 22:55:48.435322  394947 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1207 22:55:48.435358  394947 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1207 22:55:48.435430  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.438298  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.439871  394947 addons.go:239] Setting addon default-storageclass=true in "addons-746247"
	I1207 22:55:48.439924  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.440436  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.457878  394947 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1207 22:55:48.458006  394947 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1207 22:55:48.461297  394947 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 22:55:48.461339  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1207 22:55:48.461406  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.469706  394947 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 22:55:48.469736  394947 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 22:55:48.469808  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	W1207 22:55:48.477402  394947 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1207 22:55:48.479468  394947 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 22:55:48.479499  394947 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 22:55:48.479576  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.484547  394947 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1207 22:55:48.484572  394947 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1207 22:55:48.484547  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1207 22:55:48.484547  394947 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1207 22:55:48.485893  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1207 22:55:48.485940  394947 out.go:179]   - Using image docker.io/registry:3.0.0
	I1207 22:55:48.487038  394947 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1207 22:55:48.487057  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1207 22:55:48.487118  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.487384  394947 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1207 22:55:48.487499  394947 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1207 22:55:48.487511  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1207 22:55:48.487558  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.488980  394947 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1207 22:55:48.489037  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1207 22:55:48.490239  394947 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1207 22:55:48.490257  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1207 22:55:48.490313  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.491551  394947 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-746247"
	I1207 22:55:48.491595  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.492092  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.492158  394947 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 22:55:48.492169  394947 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1207 22:55:48.492297  394947 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1207 22:55:48.493916  394947 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 22:55:48.493935  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1207 22:55:48.493988  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.497375  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1207 22:55:48.497641  394947 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1207 22:55:48.497658  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1207 22:55:48.497719  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.498044  394947 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 22:55:48.498070  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 22:55:48.498123  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.500809  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1207 22:55:48.501936  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.501962  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1207 22:55:48.503823  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1207 22:55:48.506452  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1207 22:55:48.507531  394947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1207 22:55:48.507552  394947 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1207 22:55:48.507635  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.513752  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1207 22:55:48.514817  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1207 22:55:48.514842  394947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1207 22:55:48.514923  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.522213  394947 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1207 22:55:48.523223  394947 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1207 22:55:48.523251  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1207 22:55:48.523319  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.532478  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.540579  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.540479  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.565853  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.566352  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.571824  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.571983  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.572500  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.575094  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.586021  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.590462  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.591716  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.592905  394947 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1207 22:55:48.594345  394947 out.go:179]   - Using image docker.io/busybox:stable
	W1207 22:55:48.595012  394947 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1207 22:55:48.595048  394947 retry.go:31] will retry after 371.698503ms: ssh: handshake failed: EOF
	I1207 22:55:48.595614  394947 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 22:55:48.595643  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1207 22:55:48.595702  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.603267  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.611502  394947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 22:55:48.611559  394947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 22:55:48.632588  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.639184  394947 node_ready.go:35] waiting up to 6m0s for node "addons-746247" to be "Ready" ...
	I1207 22:55:48.692347  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 22:55:48.692721  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 22:55:48.701336  394947 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1207 22:55:48.701370  394947 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1207 22:55:48.709515  394947 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1207 22:55:48.709563  394947 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1207 22:55:48.720956  394947 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1207 22:55:48.720981  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1207 22:55:48.739153  394947 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1207 22:55:48.739178  394947 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1207 22:55:48.742627  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 22:55:48.746094  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1207 22:55:48.746123  394947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1207 22:55:48.746846  394947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 22:55:48.746873  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1207 22:55:48.762198  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1207 22:55:48.767334  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 22:55:48.767900  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1207 22:55:48.775682  394947 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1207 22:55:48.775710  394947 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1207 22:55:48.778356  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1207 22:55:48.780675  394947 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1207 22:55:48.780711  394947 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1207 22:55:48.783595  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1207 22:55:48.783625  394947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1207 22:55:48.783695  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 22:55:48.783793  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1207 22:55:48.783804  394947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 22:55:48.783859  394947 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 22:55:48.801109  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 22:55:48.810803  394947 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1207 22:55:48.810837  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1207 22:55:48.824932  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1207 22:55:48.824967  394947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1207 22:55:48.834994  394947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 22:55:48.835021  394947 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 22:55:48.845550  394947 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1207 22:55:48.845662  394947 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1207 22:55:48.868550  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1207 22:55:48.873612  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1207 22:55:48.873718  394947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1207 22:55:48.888394  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 22:55:48.897212  394947 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1207 22:55:48.897242  394947 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1207 22:55:48.926572  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1207 22:55:48.926600  394947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1207 22:55:48.934181  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1207 22:55:48.934217  394947 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1207 22:55:48.977730  394947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1207 22:55:48.977772  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1207 22:55:49.013149  394947 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 22:55:49.013181  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1207 22:55:49.031992  394947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1207 22:55:49.032037  394947 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1207 22:55:49.054683  394947 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
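
For context on the start.go line above: injecting the host.minikube.internal record amounts to editing the coredns ConfigMap in kube-system. A minimal client-go sketch of that kind of edit, assuming a NodeHosts-style key and a plain "<ip> <name>" entry format (both the key name and the format are assumptions for illustration, not minikube's actual implementation):

	// Sketch only: append a host record to the coredns ConfigMap.
	// The "NodeHosts" key and the entry format are assumed for illustration.
	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		if cm.Data == nil {
			cm.Data = map[string]string{}
		}
		// Append one "<ip> <hostname>" record, mirroring the log line above.
		cm.Data["NodeHosts"] += "\n192.168.49.1 host.minikube.internal"
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
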
	I1207 22:55:49.080316  394947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1207 22:55:49.080444  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1207 22:55:49.081540  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 22:55:49.126158  394947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1207 22:55:49.126240  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1207 22:55:49.160188  394947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 22:55:49.160298  394947 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1207 22:55:49.224109  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 22:55:49.227686  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1207 22:55:49.562077  394947 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-746247" context rescaled to 1 replicas
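
The kapi.go rescale above is the Deployment Scale subresource in action. A hedged sketch of the equivalent client-go call (the deployment name and namespace are taken from the log line; everything else is illustrative):

	// Sketch only: set the coredns Deployment to 1 replica via the Scale subresource.
	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()

		// Read the current scale, change the replica count, write it back.
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
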
	I1207 22:55:50.013303  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.270633102s)
	I1207 22:55:50.013983  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.12550754s)
	I1207 22:55:50.014021  394947 addons.go:495] Verifying addon metrics-server=true in "addons-746247"
	I1207 22:55:50.013980  394947 addons.go:495] Verifying addon ingress=true in "addons-746247"
	I1207 22:55:50.013469  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.246111481s)
	I1207 22:55:50.013531  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.245574215s)
	I1207 22:55:50.013659  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.235275584s)
	I1207 22:55:50.013691  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.229970065s)
	I1207 22:55:50.013775  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.229948198s)
	I1207 22:55:50.014384  394947 addons.go:495] Verifying addon registry=true in "addons-746247"
	I1207 22:55:50.013857  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.212722554s)
	I1207 22:55:50.013911  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.145264255s)
	I1207 22:55:50.013390  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.251155154s)
	I1207 22:55:50.016688  394947 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-746247 service yakd-dashboard -n yakd-dashboard
	
	I1207 22:55:50.016688  394947 out.go:179] * Verifying ingress addon...
	I1207 22:55:50.016730  394947 out.go:179] * Verifying registry addon...
	I1207 22:55:50.019404  394947 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1207 22:55:50.019405  394947 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1207 22:55:50.025283  394947 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1207 22:55:50.025305  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:50.026557  394947 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1207 22:55:50.026581  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1207 22:55:50.027932  394947 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
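
The 'default-storageclass' warning above is a plain optimistic-concurrency conflict: the local-path StorageClass was modified by something else between the read and the update. The usual remedy is to re-read the object and re-apply the change inside retry.RetryOnConflict; a sketch (the annotation key is the upstream is-default-class annotation, the rest is illustrative):

	// Sketch only: mark a StorageClass non-default, retrying on update conflicts.
	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()

		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read the latest version on every attempt so the update
			// is applied against a fresh resourceVersion.
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
		if err != nil {
			log.Fatal(err)
		}
	}
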
	I1207 22:55:50.484889  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.403258866s)
	W1207 22:55:50.484961  394947 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1207 22:55:50.484986  394947 retry.go:31] will retry after 352.93442ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1207 22:55:50.485265  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.261108175s)
	I1207 22:55:50.485303  394947 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-746247"
	I1207 22:55:50.485385  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.257604959s)
	I1207 22:55:50.488628  394947 out.go:179] * Verifying csi-hostpath-driver addon...
	I1207 22:55:50.491055  394947 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1207 22:55:50.495408  394947 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1207 22:55:50.495428  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
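
The repeating kapi.go:96 lines that follow are a poll loop over pods matched by a label selector, ticking until a pod leaves Pending. A minimal sketch of that pattern with client-go (the selector is copied from the log; the interval and timeout are arbitrary choices here, not minikube's):

	// Sketch only: poll pods matching a label selector until one is Running.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPod(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("pod %q is Running\n", p.Name)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		if err := waitForPod(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
			log.Fatal(err)
		}
	}
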
	I1207 22:55:50.598287  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:50.598417  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1207 22:55:50.643208  394947 node_ready.go:57] node "addons-746247" has "Ready":"False" status (will retry)
	I1207 22:55:50.838181  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 22:55:50.995062  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:51.022573  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:51.022711  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:51.494545  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:51.522541  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:51.522635  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:51.994353  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:52.023356  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:52.023469  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:52.494860  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:52.595384  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:52.595570  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:52.994884  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:53.022517  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:53.022705  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1207 22:55:53.142989  394947 node_ready.go:57] node "addons-746247" has "Ready":"False" status (will retry)
	I1207 22:55:53.353652  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.515421268s)
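
The failed apply and the --force retry above are the classic CRD ordering problem: the VolumeSnapshotClass object was submitted in the same batch as the CRD that defines it, before the API server had established the new kind, hence "ensure CRDs are installed first". One way to avoid the retry is to wait for the CRD's Established condition before applying the dependent objects; a sketch under that assumption (CRD name from the log, timeout arbitrary):

	// Sketch only: wait for a CRD to report Established before creating
	// custom resources of that kind.
	package main

	import (
		"context"
		"log"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := apiextclient.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()

		name := "volumesnapshotclasses.snapshot.storage.k8s.io"
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range crd.Status.Conditions {
					if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
						log.Printf("CRD %s established; safe to apply VolumeSnapshotClass objects", name)
						return
					}
				}
			}
			time.Sleep(time.Second)
		}
		log.Fatalf("CRD %s not established in time", name)
	}
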
	I1207 22:55:53.494468  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:53.523073  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:53.523258  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:53.994579  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:54.022308  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:54.022543  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:54.494702  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:54.522423  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:54.522507  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:54.994287  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:55.023137  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:55.023196  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:55.494479  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:55.523461  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:55.523608  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1207 22:55:55.642478  394947 node_ready.go:57] node "addons-746247" has "Ready":"False" status (will retry)
	I1207 22:55:55.995195  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:56.023162  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:56.023177  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:56.047400  394947 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1207 22:55:56.047467  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:56.065963  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:56.170574  394947 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1207 22:55:56.184190  394947 addons.go:239] Setting addon gcp-auth=true in "addons-746247"
	I1207 22:55:56.184258  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:56.184852  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:56.203245  394947 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1207 22:55:56.203320  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:56.222857  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:56.315851  394947 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1207 22:55:56.316923  394947 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1207 22:55:56.317892  394947 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1207 22:55:56.317912  394947 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1207 22:55:56.331739  394947 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1207 22:55:56.331765  394947 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1207 22:55:56.345721  394947 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 22:55:56.345742  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1207 22:55:56.359848  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 22:55:56.494991  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:56.522752  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:56.522832  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:56.672752  394947 addons.go:495] Verifying addon gcp-auth=true in "addons-746247"
	I1207 22:55:56.674099  394947 out.go:179] * Verifying gcp-auth addon...
	I1207 22:55:56.675875  394947 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1207 22:55:56.680263  394947 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1207 22:55:56.680287  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:56.995035  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:57.022386  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:57.022526  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:57.179304  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:57.494596  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:57.522316  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:57.522412  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:57.680160  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:57.994256  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:58.022960  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:58.023118  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1207 22:55:58.142921  394947 node_ready.go:57] node "addons-746247" has "Ready":"False" status (will retry)
	I1207 22:55:58.178919  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:58.494142  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:58.523165  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:58.523171  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:58.679648  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:58.994665  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:59.022740  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:59.022832  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:59.179043  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:59.494440  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:59.523238  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:59.523433  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:59.679744  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:59.994792  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:00.022610  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:00.022765  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:00.179283  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:00.494392  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:00.523292  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:00.523497  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1207 22:56:00.642217  394947 node_ready.go:57] node "addons-746247" has "Ready":"False" status (will retry)
	I1207 22:56:00.679309  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:00.999184  394947 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1207 22:56:00.999214  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:01.024318  394947 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1207 22:56:01.024381  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:01.024612  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:01.142293  394947 node_ready.go:49] node "addons-746247" is "Ready"
	I1207 22:56:01.142357  394947 node_ready.go:38] duration metric: took 12.503123434s for node "addons-746247" to be "Ready" ...
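
node_ready.go's check boils down to the NodeReady condition on the node object flipping to True, which is what the transition from the earlier "Ready":"False" warnings to the line above reflects. A small illustrative equivalent (node name taken from the log):

	// Sketch only: report the NodeReady condition of a single node.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-746247", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
			}
		}
	}
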
	I1207 22:56:01.142378  394947 api_server.go:52] waiting for apiserver process to appear ...
	I1207 22:56:01.142443  394947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 22:56:01.160923  394947 api_server.go:72] duration metric: took 12.781396676s to wait for apiserver process to appear ...
	I1207 22:56:01.160960  394947 api_server.go:88] waiting for apiserver healthz status ...
	I1207 22:56:01.160986  394947 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1207 22:56:01.166747  394947 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1207 22:56:01.167990  394947 api_server.go:141] control plane version: v1.34.2
	I1207 22:56:01.168028  394947 api_server.go:131] duration metric: took 7.059712ms to wait for apiserver health ...
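
The healthz probe logged above can be reproduced with the authenticated REST client rather than a raw HTTPS call; a sketch that expects the literal "ok" body shown in the log (kubeconfig path and error handling are illustrative):

	// Sketch only: hit the API server's /healthz endpoint via the REST client.
	package main

	import (
		"context"
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("/healthz: %s\n", string(body)) // expected: "ok"
	}
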
	I1207 22:56:01.168039  394947 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 22:56:01.177516  394947 system_pods.go:59] 20 kube-system pods found
	I1207 22:56:01.177567  394947 system_pods.go:61] "amd-gpu-device-plugin-kblb2" [0d7d3c61-b559-4b2d-ad9c-0c55bd5a52ee] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:56:01.177579  394947 system_pods.go:61] "coredns-66bc5c9577-tphvv" [7beb0e82-6dc4-4096-af61-36892f47cffa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:56:01.177592  394947 system_pods.go:61] "csi-hostpath-attacher-0" [a5354250-4aeb-4575-aedb-24c6f8664823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:56:01.177600  394947 system_pods.go:61] "csi-hostpath-resizer-0" [0706c1a6-d865-41e1-b896-5466613da19a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:56:01.177609  394947 system_pods.go:61] "csi-hostpathplugin-x5hj6" [4b6180c4-31ad-42af-bba8-c8c05417d718] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:56:01.177616  394947 system_pods.go:61] "etcd-addons-746247" [e4ee9c9c-4d01-4b73-ac5a-1bbbd97bbe79] Running
	I1207 22:56:01.177623  394947 system_pods.go:61] "kindnet-r872z" [64913453-1fd0-4d9e-80e0-f4e33f99b8ff] Running
	I1207 22:56:01.177628  394947 system_pods.go:61] "kube-apiserver-addons-746247" [501e8522-edbc-4fff-bb71-a85168d6c576] Running
	I1207 22:56:01.177635  394947 system_pods.go:61] "kube-controller-manager-addons-746247" [4ebf58bd-0977-4c58-b77d-e20f01592d9d] Running
	I1207 22:56:01.177643  394947 system_pods.go:61] "kube-ingress-dns-minikube" [b03239fb-2faa-41b2-bc04-248413da0752] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:56:01.177648  394947 system_pods.go:61] "kube-proxy-j7cvz" [9f89bed5-657e-40e5-b6d4-f90d6c36743e] Running
	I1207 22:56:01.177654  394947 system_pods.go:61] "kube-scheduler-addons-746247" [090303be-b2fa-46c7-bec7-ae11cd33ab78] Running
	I1207 22:56:01.177667  394947 system_pods.go:61] "metrics-server-85b7d694d7-jnsx9" [2733de69-8b13-43ab-8b4e-a11f01ca6694] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:56:01.177675  394947 system_pods.go:61] "nvidia-device-plugin-daemonset-gpckr" [db82d55a-0dbb-4348-a938-da80fe468a31] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:56:01.177684  394947 system_pods.go:61] "registry-6b586f9694-wsdqp" [56184daa-e3a4-46ca-b017-5a3dd986f623] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:56:01.177691  394947 system_pods.go:61] "registry-creds-764b6fb674-vl9gn" [fd8e6cfd-a85b-4980-b193-cf4b6f8bc5b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:56:01.177700  394947 system_pods.go:61] "registry-proxy-d7n5r" [bfdc5400-c591-460d-89bb-87f432c0b904] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:56:01.177710  394947 system_pods.go:61] "snapshot-controller-7d9fbc56b8-lg5vk" [d2468d0f-bdb3-4321-a85a-ac7e3fc46b69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:56:01.177723  394947 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nzqtx" [c2ea9276-b890-436f-9681-f173192e1580] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:56:01.177733  394947 system_pods.go:61] "storage-provisioner" [f3580680-aa34-475b-a6a6-1c280b516ae0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:56:01.177747  394947 system_pods.go:74] duration metric: took 9.700592ms to wait for pod list to return data ...
	I1207 22:56:01.177762  394947 default_sa.go:34] waiting for default service account to be created ...
	I1207 22:56:01.179991  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:01.180680  394947 default_sa.go:45] found service account: "default"
	I1207 22:56:01.180708  394947 default_sa.go:55] duration metric: took 2.935344ms for default service account to be created ...
	I1207 22:56:01.180720  394947 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 22:56:01.185104  394947 system_pods.go:86] 20 kube-system pods found
	I1207 22:56:01.185149  394947 system_pods.go:89] "amd-gpu-device-plugin-kblb2" [0d7d3c61-b559-4b2d-ad9c-0c55bd5a52ee] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:56:01.185161  394947 system_pods.go:89] "coredns-66bc5c9577-tphvv" [7beb0e82-6dc4-4096-af61-36892f47cffa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:56:01.185170  394947 system_pods.go:89] "csi-hostpath-attacher-0" [a5354250-4aeb-4575-aedb-24c6f8664823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:56:01.185178  394947 system_pods.go:89] "csi-hostpath-resizer-0" [0706c1a6-d865-41e1-b896-5466613da19a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:56:01.185188  394947 system_pods.go:89] "csi-hostpathplugin-x5hj6" [4b6180c4-31ad-42af-bba8-c8c05417d718] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:56:01.185193  394947 system_pods.go:89] "etcd-addons-746247" [e4ee9c9c-4d01-4b73-ac5a-1bbbd97bbe79] Running
	I1207 22:56:01.185199  394947 system_pods.go:89] "kindnet-r872z" [64913453-1fd0-4d9e-80e0-f4e33f99b8ff] Running
	I1207 22:56:01.185204  394947 system_pods.go:89] "kube-apiserver-addons-746247" [501e8522-edbc-4fff-bb71-a85168d6c576] Running
	I1207 22:56:01.185210  394947 system_pods.go:89] "kube-controller-manager-addons-746247" [4ebf58bd-0977-4c58-b77d-e20f01592d9d] Running
	I1207 22:56:01.185217  394947 system_pods.go:89] "kube-ingress-dns-minikube" [b03239fb-2faa-41b2-bc04-248413da0752] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:56:01.185224  394947 system_pods.go:89] "kube-proxy-j7cvz" [9f89bed5-657e-40e5-b6d4-f90d6c36743e] Running
	I1207 22:56:01.185229  394947 system_pods.go:89] "kube-scheduler-addons-746247" [090303be-b2fa-46c7-bec7-ae11cd33ab78] Running
	I1207 22:56:01.185236  394947 system_pods.go:89] "metrics-server-85b7d694d7-jnsx9" [2733de69-8b13-43ab-8b4e-a11f01ca6694] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:56:01.185243  394947 system_pods.go:89] "nvidia-device-plugin-daemonset-gpckr" [db82d55a-0dbb-4348-a938-da80fe468a31] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:56:01.185251  394947 system_pods.go:89] "registry-6b586f9694-wsdqp" [56184daa-e3a4-46ca-b017-5a3dd986f623] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:56:01.185258  394947 system_pods.go:89] "registry-creds-764b6fb674-vl9gn" [fd8e6cfd-a85b-4980-b193-cf4b6f8bc5b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:56:01.185268  394947 system_pods.go:89] "registry-proxy-d7n5r" [bfdc5400-c591-460d-89bb-87f432c0b904] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:56:01.185281  394947 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lg5vk" [d2468d0f-bdb3-4321-a85a-ac7e3fc46b69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:56:01.185292  394947 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzqtx" [c2ea9276-b890-436f-9681-f173192e1580] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:56:01.185299  394947 system_pods.go:89] "storage-provisioner" [f3580680-aa34-475b-a6a6-1c280b516ae0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:56:01.185319  394947 retry.go:31] will retry after 198.97653ms: missing components: kube-dns
	I1207 22:56:01.390085  394947 system_pods.go:86] 20 kube-system pods found
	I1207 22:56:01.390125  394947 system_pods.go:89] "amd-gpu-device-plugin-kblb2" [0d7d3c61-b559-4b2d-ad9c-0c55bd5a52ee] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:56:01.390132  394947 system_pods.go:89] "coredns-66bc5c9577-tphvv" [7beb0e82-6dc4-4096-af61-36892f47cffa] Running
	I1207 22:56:01.390152  394947 system_pods.go:89] "csi-hostpath-attacher-0" [a5354250-4aeb-4575-aedb-24c6f8664823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:56:01.390163  394947 system_pods.go:89] "csi-hostpath-resizer-0" [0706c1a6-d865-41e1-b896-5466613da19a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:56:01.390171  394947 system_pods.go:89] "csi-hostpathplugin-x5hj6" [4b6180c4-31ad-42af-bba8-c8c05417d718] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:56:01.390182  394947 system_pods.go:89] "etcd-addons-746247" [e4ee9c9c-4d01-4b73-ac5a-1bbbd97bbe79] Running
	I1207 22:56:01.390192  394947 system_pods.go:89] "kindnet-r872z" [64913453-1fd0-4d9e-80e0-f4e33f99b8ff] Running
	I1207 22:56:01.390198  394947 system_pods.go:89] "kube-apiserver-addons-746247" [501e8522-edbc-4fff-bb71-a85168d6c576] Running
	I1207 22:56:01.390203  394947 system_pods.go:89] "kube-controller-manager-addons-746247" [4ebf58bd-0977-4c58-b77d-e20f01592d9d] Running
	I1207 22:56:01.390211  394947 system_pods.go:89] "kube-ingress-dns-minikube" [b03239fb-2faa-41b2-bc04-248413da0752] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:56:01.390216  394947 system_pods.go:89] "kube-proxy-j7cvz" [9f89bed5-657e-40e5-b6d4-f90d6c36743e] Running
	I1207 22:56:01.390222  394947 system_pods.go:89] "kube-scheduler-addons-746247" [090303be-b2fa-46c7-bec7-ae11cd33ab78] Running
	I1207 22:56:01.390230  394947 system_pods.go:89] "metrics-server-85b7d694d7-jnsx9" [2733de69-8b13-43ab-8b4e-a11f01ca6694] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:56:01.390239  394947 system_pods.go:89] "nvidia-device-plugin-daemonset-gpckr" [db82d55a-0dbb-4348-a938-da80fe468a31] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:56:01.390255  394947 system_pods.go:89] "registry-6b586f9694-wsdqp" [56184daa-e3a4-46ca-b017-5a3dd986f623] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:56:01.390269  394947 system_pods.go:89] "registry-creds-764b6fb674-vl9gn" [fd8e6cfd-a85b-4980-b193-cf4b6f8bc5b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:56:01.390277  394947 system_pods.go:89] "registry-proxy-d7n5r" [bfdc5400-c591-460d-89bb-87f432c0b904] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:56:01.390285  394947 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lg5vk" [d2468d0f-bdb3-4321-a85a-ac7e3fc46b69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:56:01.390303  394947 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzqtx" [c2ea9276-b890-436f-9681-f173192e1580] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:56:01.390310  394947 system_pods.go:89] "storage-provisioner" [f3580680-aa34-475b-a6a6-1c280b516ae0] Running
	I1207 22:56:01.390319  394947 system_pods.go:126] duration metric: took 209.59162ms to wait for k8s-apps to be running ...
	I1207 22:56:01.390362  394947 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 22:56:01.390415  394947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 22:56:01.404422  394947 system_svc.go:56] duration metric: took 14.048204ms WaitForService to wait for kubelet
	I1207 22:56:01.404457  394947 kubeadm.go:587] duration metric: took 13.02493717s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 22:56:01.404480  394947 node_conditions.go:102] verifying NodePressure condition ...
	I1207 22:56:01.407624  394947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 22:56:01.407654  394947 node_conditions.go:123] node cpu capacity is 8
	I1207 22:56:01.407669  394947 node_conditions.go:105] duration metric: took 3.182892ms to run NodePressure ...
	I1207 22:56:01.407686  394947 start.go:242] waiting for startup goroutines ...
	I1207 22:56:01.494832  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:01.522645  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:01.522649  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:01.679447  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:01.995201  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:02.023378  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:02.023565  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:02.179492  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:02.494615  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:02.595859  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:02.595916  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:02.680870  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:02.996634  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:03.023862  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:03.024219  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:03.180019  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:03.495307  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:03.523096  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:03.523195  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:03.680573  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:03.995603  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:04.023908  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:04.023908  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:04.180096  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:04.494130  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:04.523568  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:04.523615  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:04.679867  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:04.995619  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:05.022899  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:05.022929  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:05.179067  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:05.496154  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:05.523072  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:05.523549  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:05.679917  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:05.994806  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:06.024643  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:06.024694  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:06.180229  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:06.494215  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:06.523077  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:06.523105  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:06.679215  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:06.995054  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:07.023688  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:07.024039  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:07.179965  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:07.494965  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:07.522771  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:07.522851  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:07.680080  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:07.994273  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:08.024084  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:08.024141  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:08.179098  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:08.494937  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:08.522911  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:08.522939  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:08.679135  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:08.994206  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:09.022971  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:09.023046  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:09.178711  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:09.495683  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:09.522235  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:09.522393  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:09.679207  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:09.994632  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:10.022879  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:10.022905  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:10.179570  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:10.495384  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:10.523613  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:10.523689  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:10.680196  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:10.995479  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:11.023698  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:11.023723  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:11.179911  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:11.495354  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:11.523120  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:11.523154  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:11.679594  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:11.994809  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:12.022421  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:12.022554  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:12.179691  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:12.495587  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:12.596073  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:12.596350  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:12.679030  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:12.994803  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:13.022674  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:13.022709  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:13.179667  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:13.495055  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:13.522734  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:13.522792  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:13.679922  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:13.995589  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:14.023292  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:14.023350  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:14.179146  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:14.501017  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:14.522916  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:14.523151  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:14.679436  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:14.994630  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:15.023552  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:15.023610  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:15.179948  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:15.495612  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:15.523280  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:15.523341  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:15.679407  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:15.994556  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:16.023574  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:16.023732  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:16.179759  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:16.496082  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:16.523305  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:16.523369  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:16.680218  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:16.995804  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:17.022698  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:17.022871  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:17.179923  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:17.549402  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:17.549437  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:17.549677  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:17.679510  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:17.995187  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:18.023159  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:18.023310  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:18.180318  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:18.495211  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:18.595655  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:18.595692  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:18.679275  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:18.994866  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:19.022902  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:19.023163  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:19.179213  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:19.494526  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:19.523252  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:19.523506  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:19.679405  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:19.995272  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:20.023074  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:20.023389  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:20.178824  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:20.495569  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:20.523159  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:20.523438  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:20.679219  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:20.994639  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:21.023714  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:21.023815  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:21.178856  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:21.495478  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:21.523473  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:21.523541  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:21.679360  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:21.995316  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:22.022780  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:22.022854  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:22.180549  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:22.541773  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:22.542556  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:22.542741  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:22.686106  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:22.995162  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:23.095704  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:23.095704  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:23.179272  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:23.497161  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:23.522762  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:23.522814  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:23.679955  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:23.995461  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:24.023345  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:24.023595  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:24.179292  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:24.495048  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:24.595803  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:24.595823  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:24.679543  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:24.995106  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:25.022971  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:25.023125  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:25.178820  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:25.496897  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:25.597507  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:25.597688  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:25.697760  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:25.994973  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:26.022636  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:26.022842  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:26.179171  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:26.494765  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:26.522695  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:26.522717  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:26.679142  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:26.994282  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:27.023055  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:27.023052  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:27.179126  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:27.495088  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:27.522993  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:27.523029  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:27.678955  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:27.994376  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:28.023013  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:28.023059  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:28.179399  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:28.494468  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:28.523390  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:28.523728  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:28.679736  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:28.995082  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:29.022873  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:29.023116  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:29.178779  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:29.496007  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:29.523361  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:29.523444  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:29.679378  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:29.994630  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:30.023958  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:30.024406  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:30.179876  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:30.495180  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:30.523233  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:30.523257  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:30.679496  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:30.994878  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:31.022671  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:31.022835  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:31.179693  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:31.494892  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:31.522734  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:31.522793  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:31.679736  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:31.994974  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:32.022915  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:32.023064  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:32.180149  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:32.495176  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:32.596061  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:32.596131  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:32.696193  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:32.994769  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:33.022910  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:33.022916  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:33.179422  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:33.493953  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:33.522466  394947 kapi.go:107] duration metric: took 43.50305903s to wait for kubernetes.io/minikube-addons=registry ...
	I1207 22:56:33.522565  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:33.681341  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:33.994804  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:34.023881  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:34.180801  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:34.533032  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:34.547640  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:34.679664  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:34.995428  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:35.023208  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:35.178958  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:35.494591  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:35.523585  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:35.682247  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:35.997138  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:36.024548  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:36.179492  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:36.496547  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:36.523185  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:36.679115  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:36.994351  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:37.023565  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:37.179919  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:37.494748  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:37.522912  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:37.680159  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:37.994959  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:38.022757  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:38.179377  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:38.494490  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:38.523375  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:38.679432  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:38.995144  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:39.023357  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:39.179648  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:39.496491  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:39.596840  394947 kapi.go:107] duration metric: took 49.577429437s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1207 22:56:39.679232  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:39.995134  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:40.178873  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:40.493955  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:40.718207  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:40.994616  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:41.179939  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:41.495640  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:41.680389  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:41.994847  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:42.180182  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:42.494811  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:42.680459  394947 kapi.go:107] duration metric: took 46.004581818s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1207 22:56:42.682229  394947 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-746247 cluster.
	I1207 22:56:42.683867  394947 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1207 22:56:42.687060  394947 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1207 22:56:42.995303  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:43.494736  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:43.994566  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:44.495379  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:44.994398  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:45.494587  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:45.995386  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:46.494493  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:46.995144  394947 kapi.go:107] duration metric: took 56.504088937s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1207 22:56:46.996992  394947 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, metrics-server, storage-provisioner, amd-gpu-device-plugin, inspektor-gadget, cloud-spanner, yakd, storage-provisioner-rancher, registry-creds, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1207 22:56:46.998234  394947 addons.go:530] duration metric: took 58.618697865s for enable addons: enabled=[nvidia-device-plugin ingress-dns metrics-server storage-provisioner amd-gpu-device-plugin inspektor-gadget cloud-spanner yakd storage-provisioner-rancher registry-creds volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1207 22:56:46.998279  394947 start.go:247] waiting for cluster config update ...
	I1207 22:56:46.998302  394947 start.go:256] writing updated cluster config ...
	I1207 22:56:46.998606  394947 ssh_runner.go:195] Run: rm -f paused
	I1207 22:56:47.002993  394947 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 22:56:47.006417  394947 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tphvv" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.010943  394947 pod_ready.go:94] pod "coredns-66bc5c9577-tphvv" is "Ready"
	I1207 22:56:47.010979  394947 pod_ready.go:86] duration metric: took 4.536878ms for pod "coredns-66bc5c9577-tphvv" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.013268  394947 pod_ready.go:83] waiting for pod "etcd-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.017464  394947 pod_ready.go:94] pod "etcd-addons-746247" is "Ready"
	I1207 22:56:47.017490  394947 pod_ready.go:86] duration metric: took 4.195356ms for pod "etcd-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.019467  394947 pod_ready.go:83] waiting for pod "kube-apiserver-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.023126  394947 pod_ready.go:94] pod "kube-apiserver-addons-746247" is "Ready"
	I1207 22:56:47.023147  394947 pod_ready.go:86] duration metric: took 3.660703ms for pod "kube-apiserver-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.025010  394947 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.407220  394947 pod_ready.go:94] pod "kube-controller-manager-addons-746247" is "Ready"
	I1207 22:56:47.407248  394947 pod_ready.go:86] duration metric: took 382.220157ms for pod "kube-controller-manager-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.607770  394947 pod_ready.go:83] waiting for pod "kube-proxy-j7cvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:48.006940  394947 pod_ready.go:94] pod "kube-proxy-j7cvz" is "Ready"
	I1207 22:56:48.006968  394947 pod_ready.go:86] duration metric: took 399.164571ms for pod "kube-proxy-j7cvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:48.207440  394947 pod_ready.go:83] waiting for pod "kube-scheduler-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:48.607352  394947 pod_ready.go:94] pod "kube-scheduler-addons-746247" is "Ready"
	I1207 22:56:48.607387  394947 pod_ready.go:86] duration metric: took 399.913886ms for pod "kube-scheduler-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:48.607404  394947 pod_ready.go:40] duration metric: took 1.604378815s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 22:56:48.654476  394947 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 22:56:48.656460  394947 out.go:179] * Done! kubectl is now configured to use "addons-746247" cluster and "default" namespace by default
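	The gcp-auth messages above describe opting a pod out of credential mounting: add a label whose key is gcp-auth-skip-secret. A minimal sketch of such a pod spec, assuming the standard Kubernetes Go client types (the pod name and image below are placeholders, not taken from this run):
	
	    package main
	    
	    import (
	    	"fmt"
	    
	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"sigs.k8s.io/yaml"
	    )
	    
	    func main() {
	    	// Pod labeled so the gcp-auth webhook skips mounting GCP credentials into it.
	    	pod := corev1.Pod{
	    		TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
	    		ObjectMeta: metav1.ObjectMeta{
	    			Name:   "no-gcp-creds", // placeholder name
	    			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
	    		},
	    		Spec: corev1.PodSpec{
	    			Containers: []corev1.Container{{
	    				Name:    "busybox",
	    				Image:   "gcr.io/k8s-minikube/busybox", // placeholder image
	    				Command: []string{"sleep", "3600"},
	    			}},
	    		},
	    	}
	    	// Print the manifest as YAML, e.g. to feed into kubectl apply -f -.
	    	out, err := yaml.Marshal(&pod)
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Print(string(out))
	    }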
	
	
	==> CRI-O <==
	Dec 07 22:58:18 addons-746247 crio[772]: time="2025-12-07T22:58:18.823089869Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-vl9gn/registry-creds" id=1e62eb36-53bc-4260-8499-def13adc9d0e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 22:58:18 addons-746247 crio[772]: time="2025-12-07T22:58:18.823255227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 22:58:18 addons-746247 crio[772]: time="2025-12-07T22:58:18.830934469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 22:58:18 addons-746247 crio[772]: time="2025-12-07T22:58:18.831378126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 22:58:18 addons-746247 crio[772]: time="2025-12-07T22:58:18.859220464Z" level=info msg="Created container 1bbc56671742a86191f61a7de2faeeacf791ee3faf44489c27fa25223162165a: kube-system/registry-creds-764b6fb674-vl9gn/registry-creds" id=1e62eb36-53bc-4260-8499-def13adc9d0e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 22:58:18 addons-746247 crio[772]: time="2025-12-07T22:58:18.859828842Z" level=info msg="Starting container: 1bbc56671742a86191f61a7de2faeeacf791ee3faf44489c27fa25223162165a" id=8d0c5930-4ac0-4027-b5b8-97cf9f030d30 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 22:58:18 addons-746247 crio[772]: time="2025-12-07T22:58:18.861544636Z" level=info msg="Started container" PID=8842 containerID=1bbc56671742a86191f61a7de2faeeacf791ee3faf44489c27fa25223162165a description=kube-system/registry-creds-764b6fb674-vl9gn/registry-creds id=8d0c5930-4ac0-4027-b5b8-97cf9f030d30 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f2dc61df691e50add0171895c7756b7ec3beae985c2f51c73e825b43ffafeb6d
	Dec 07 22:58:43 addons-746247 crio[772]: time="2025-12-07T22:58:43.290744583Z" level=info msg="Stopping pod sandbox: 4002f50fc303501387c6814eec00c5a6677a3fca1e5d4bddb15c122f6f643813" id=10539599-a655-4aa5-8e0a-b4609eb6ab2e name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 07 22:58:43 addons-746247 crio[772]: time="2025-12-07T22:58:43.290805137Z" level=info msg="Stopped pod sandbox (already stopped): 4002f50fc303501387c6814eec00c5a6677a3fca1e5d4bddb15c122f6f643813" id=10539599-a655-4aa5-8e0a-b4609eb6ab2e name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 07 22:58:43 addons-746247 crio[772]: time="2025-12-07T22:58:43.2911208Z" level=info msg="Removing pod sandbox: 4002f50fc303501387c6814eec00c5a6677a3fca1e5d4bddb15c122f6f643813" id=1d7df7c2-c478-41c1-a6da-51a35049d76d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 07 22:58:43 addons-746247 crio[772]: time="2025-12-07T22:58:43.294299336Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 22:58:43 addons-746247 crio[772]: time="2025-12-07T22:58:43.294383713Z" level=info msg="Removed pod sandbox: 4002f50fc303501387c6814eec00c5a6677a3fca1e5d4bddb15c122f6f643813" id=1d7df7c2-c478-41c1-a6da-51a35049d76d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.574224543Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-p8mwn/POD" id=554311e4-4e14-492b-b78a-47289bd26fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.574296691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.58074758Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-p8mwn Namespace:default ID:82060d2eaed581c6c23b3d6003af75b1d5c76c02e0a90f194d501ed3ae90bbf6 UID:548c8faa-c351-41bd-bb64-07106d611afa NetNS:/var/run/netns/345692dc-15bb-4bd3-bd79-e4dd9ee36213 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000aba750}] Aliases:map[]}"
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.58079084Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-p8mwn to CNI network \"kindnet\" (type=ptp)"
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.592462245Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-p8mwn Namespace:default ID:82060d2eaed581c6c23b3d6003af75b1d5c76c02e0a90f194d501ed3ae90bbf6 UID:548c8faa-c351-41bd-bb64-07106d611afa NetNS:/var/run/netns/345692dc-15bb-4bd3-bd79-e4dd9ee36213 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000aba750}] Aliases:map[]}"
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.592607885Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-p8mwn for CNI network kindnet (type=ptp)"
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.593526943Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.594284872Z" level=info msg="Ran pod sandbox 82060d2eaed581c6c23b3d6003af75b1d5c76c02e0a90f194d501ed3ae90bbf6 with infra container: default/hello-world-app-5d498dc89-p8mwn/POD" id=554311e4-4e14-492b-b78a-47289bd26fd3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.595505092Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0bf8a027-d503-423e-b30e-6dab72a4aa8f name=/runtime.v1.ImageService/ImageStatus
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.595661537Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=0bf8a027-d503-423e-b30e-6dab72a4aa8f name=/runtime.v1.ImageService/ImageStatus
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.595707147Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=0bf8a027-d503-423e-b30e-6dab72a4aa8f name=/runtime.v1.ImageService/ImageStatus
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.596361996Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=fa41beb2-f7d7-4ca9-86a1-2ebbdcebea6a name=/runtime.v1.ImageService/PullImage
	Dec 07 22:59:33 addons-746247 crio[772]: time="2025-12-07T22:59:33.603610724Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	1bbc56671742a       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   f2dc61df691e5       registry-creds-764b6fb674-vl9gn            kube-system
	c51b8e4dc0290       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago        Running             nginx                                    0                   38d2ff810e777       nginx                                      default
	84c3df26fdfc5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   8226662d2fc5f       busybox                                    default
	15d6c69879b1c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   2df6bb8a249f2       csi-hostpathplugin-x5hj6                   kube-system
	5fb12f5f4df2a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   2df6bb8a249f2       csi-hostpathplugin-x5hj6                   kube-system
	fe56a017640b6       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   2df6bb8a249f2       csi-hostpathplugin-x5hj6                   kube-system
	504d8b39e428b       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   2df6bb8a249f2       csi-hostpathplugin-x5hj6                   kube-system
	0cc657ef96c6a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago        Running             gcp-auth                                 0                   a7b56cb6029cb       gcp-auth-78565c9fb4-x8dr5                  gcp-auth
	50ad042517d0a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago        Running             node-driver-registrar                    0                   2df6bb8a249f2       csi-hostpathplugin-x5hj6                   kube-system
	e4e2013e7e709       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago        Running             controller                               0                   0a6436af92a21       ingress-nginx-controller-6c8bf45fb-7h5rb   ingress-nginx
	2aa48bdedb241       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago        Running             gadget                                   0                   193349b1d36b6       gadget-8ktw6                               gadget
	b28acd3bc252a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   b38a963d885d0       registry-proxy-d7n5r                       kube-system
	1dad0dc022510       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   25af72303bf3e       nvidia-device-plugin-daemonset-gpckr       kube-system
	7e6ab6bbbad33       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   4e17f514a68c7       snapshot-controller-7d9fbc56b8-nzqtx       kube-system
	2ee9d403c718a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   4bd83b91087d4       snapshot-controller-7d9fbc56b8-lg5vk       kube-system
	d235bae133495       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   211b1f6a7960a       amd-gpu-device-plugin-kblb2                kube-system
	dd2a1ddd16307       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   b650eeb7b6a95       csi-hostpath-attacher-0                    kube-system
	b0daa49120f4c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   6ee9cc074ad55       local-path-provisioner-648f6765c9-5n4rs    local-path-storage
	08fe42979fddb       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   eaa71e68f3756       csi-hostpath-resizer-0                     kube-system
	79ffbf10d4d6a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   2df6bb8a249f2       csi-hostpathplugin-x5hj6                   kube-system
	b21b334597fd7       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   4671a413d1e60       yakd-dashboard-5ff678cb9-nkjk8             yakd-dashboard
	0a5bc6342e0fa       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   2c185b03d5f06       registry-6b586f9694-wsdqp                  kube-system
	a32a551446fdd       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             3 minutes ago        Exited              patch                                    1                   9ecf1e3c45cc0       ingress-nginx-admission-patch-klnc2        ingress-nginx
	268f49610f6f9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              create                                   0                   3d1b80cad1630       ingress-nginx-admission-create-bkb7d       ingress-nginx
	443f71f01193e       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago        Running             cloud-spanner-emulator                   0                   526627ef47ab1       cloud-spanner-emulator-5bdddb765-8hk6l     default
	125a62d8c60a9       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   5a21cd27baca0       metrics-server-85b7d694d7-jnsx9            kube-system
	f043948674122       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   4580123f57f82       kube-ingress-dns-minikube                  kube-system
	c09a0b77cbea1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   75dbd1365fd8f       storage-provisioner                        kube-system
	c7ac4b9dcfe98       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   7a926a976a790       coredns-66bc5c9577-tphvv                   kube-system
	d9470261de6e4       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             3 minutes ago        Running             kube-proxy                               0                   4f19d3960bb7c       kube-proxy-j7cvz                           kube-system
	4cd369ec2d01e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago        Running             kindnet-cni                              0                   b2733beb80ca1       kindnet-r872z                              kube-system
	2f96412fe3f9d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             3 minutes ago        Running             kube-controller-manager                  0                   cb6682bad036b       kube-controller-manager-addons-746247      kube-system
	070b82a22d636       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             3 minutes ago        Running             kube-apiserver                           0                   e1fa5fbe7aec3       kube-apiserver-addons-746247               kube-system
	bbb24b899c6b3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             3 minutes ago        Running             etcd                                     0                   1b31ac2f693d1       etcd-addons-746247                         kube-system
	cb318a4f62348       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             3 minutes ago        Running             kube-scheduler                           0                   485463c20fc46       kube-scheduler-addons-746247               kube-system
	
	
	==> coredns [c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e] <==
	[INFO] 10.244.0.22:44865 - 37183 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086393s
	[INFO] 10.244.0.22:43591 - 65010 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005267394s
	[INFO] 10.244.0.22:37649 - 56 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005332624s
	[INFO] 10.244.0.22:36465 - 57557 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004404219s
	[INFO] 10.244.0.22:48320 - 46661 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004754858s
	[INFO] 10.244.0.22:58183 - 11908 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003985172s
	[INFO] 10.244.0.22:59845 - 29266 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004341968s
	[INFO] 10.244.0.22:37700 - 55173 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000896281s
	[INFO] 10.244.0.22:35759 - 49979 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.002063482s
	[INFO] 10.244.0.26:35924 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000279513s
	[INFO] 10.244.0.26:41636 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175333s
	[INFO] 10.244.0.31:60754 - 34370 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000228184s
	[INFO] 10.244.0.31:57748 - 35714 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000313308s
	[INFO] 10.244.0.31:49022 - 35188 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000146142s
	[INFO] 10.244.0.31:49176 - 16334 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000178716s
	[INFO] 10.244.0.31:36267 - 52121 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000129132s
	[INFO] 10.244.0.31:46993 - 50227 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000189881s
	[INFO] 10.244.0.31:55721 - 32061 "AAAA IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.004773667s
	[INFO] 10.244.0.31:35378 - 52749 "A IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.004832795s
	[INFO] 10.244.0.31:36104 - 41341 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.003860156s
	[INFO] 10.244.0.31:50583 - 64538 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004769389s
	[INFO] 10.244.0.31:37683 - 5362 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004412208s
	[INFO] 10.244.0.31:53870 - 54606 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004505995s
	[INFO] 10.244.0.31:34994 - 32936 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001804427s
	[INFO] 10.244.0.31:39024 - 43348 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001827507s
	
	
	==> describe nodes <==
	Name:               addons-746247
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-746247
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=addons-746247
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_55_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-746247
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-746247"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:55:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-746247
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 22:59:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:58:47 +0000   Sun, 07 Dec 2025 22:55:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:58:47 +0000   Sun, 07 Dec 2025 22:55:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:58:47 +0000   Sun, 07 Dec 2025 22:55:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 22:58:47 +0000   Sun, 07 Dec 2025 22:56:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-746247
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                e3c766b8-0955-4cdf-b1a6-92b0d064495c
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  default                     cloud-spanner-emulator-5bdddb765-8hk6l      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  default                     hello-world-app-5d498dc89-p8mwn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-8ktw6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  gcp-auth                    gcp-auth-78565c9fb4-x8dr5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-7h5rb    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m44s
	  kube-system                 amd-gpu-device-plugin-kblb2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 coredns-66bc5c9577-tphvv                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m45s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 csi-hostpathplugin-x5hj6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-addons-746247                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m51s
	  kube-system                 kindnet-r872z                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m45s
	  kube-system                 kube-apiserver-addons-746247                250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-controller-manager-addons-746247       200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kube-proxy-j7cvz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kube-scheduler-addons-746247                100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 metrics-server-85b7d694d7-jnsx9             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m45s
	  kube-system                 nvidia-device-plugin-daemonset-gpckr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 registry-6b586f9694-wsdqp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 registry-creds-764b6fb674-vl9gn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 registry-proxy-d7n5r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 snapshot-controller-7d9fbc56b8-lg5vk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 snapshot-controller-7d9fbc56b8-nzqtx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  local-path-storage          local-path-provisioner-648f6765c9-5n4rs     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-nkjk8              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m44s                  kube-proxy       
	  Normal  Starting                 3m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m56s (x8 over 3m56s)  kubelet          Node addons-746247 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x8 over 3m56s)  kubelet          Node addons-746247 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x8 over 3m56s)  kubelet          Node addons-746247 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s                  kubelet          Node addons-746247 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s                  kubelet          Node addons-746247 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s                  kubelet          Node addons-746247 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m46s                  node-controller  Node addons-746247 event: Registered Node addons-746247 in Controller
	  Normal  NodeReady                3m34s                  kubelet          Node addons-746247 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ee 0d 03 dc f4 50 08 06
	[  +0.000377] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea f6 93 38 ff e7 08 06
	[Dec 7 22:57] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa 20 20 0a 65 2f c6 a0 ab fc 71 65 08 00
	[  +1.031211] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa 20 20 0a 65 2f c6 a0 ab fc 71 65 08 00
	[  +1.024919] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: fa 20 20 0a 65 2f c6 a0 ab fc 71 65 08 00
	[  +1.022918] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: fa 20 20 0a 65 2f c6 a0 ab fc 71 65 08 00
	[  +1.023924] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: fa 20 20 0a 65 2f c6 a0 ab fc 71 65 08 00
	[  +1.023889] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa 20 20 0a 65 2f c6 a0 ab fc 71 65 08 00
	[  +2.047806] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: fa 20 20 0a 65 2f c6 a0 ab fc 71 65 08 00
	[  +4.032636] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa 20 20 0a 65 2f c6 a0 ab fc 71 65 08 00
	[  +8.446393] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa 20 20 0a 65 2f c6 a0 ab fc 71 65 08 00
	[ +16.382716] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa 20 20 0a 65 2f c6 a0 ab fc 71 65 08 00
	[Dec 7 22:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa 20 20 0a 65 2f c6 a0 ab fc 71 65 08 00
	
	
	==> etcd [bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856] <==
	{"level":"warn","ts":"2025-12-07T22:55:40.364525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.371171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.387459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.393955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.401535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.407913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.415531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.422260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.432137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.438509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.445139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.451769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.458876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.467465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.479578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.487473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.494445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.543462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:51.077860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:51.091590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:56:18.309570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:56:18.317996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:56:18.336534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53416","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T22:56:49.284966Z","caller":"traceutil/trace.go:172","msg":"trace[1814580813] transaction","detail":"{read_only:false; response_revision:1243; number_of_response:1; }","duration":"112.948708ms","start":"2025-12-07T22:56:49.171999Z","end":"2025-12-07T22:56:49.284947Z","steps":["trace[1814580813] 'process raft request'  (duration: 112.908673ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T22:56:49.284977Z","caller":"traceutil/trace.go:172","msg":"trace[1255087046] transaction","detail":"{read_only:false; response_revision:1242; number_of_response:1; }","duration":"113.294291ms","start":"2025-12-07T22:56:49.171664Z","end":"2025-12-07T22:56:49.284958Z","steps":["trace[1255087046] 'process raft request'  (duration: 113.165448ms)"],"step_count":1}
	
	
	==> gcp-auth [0cc657ef96c6a06ba798c4256933d459aefef6c133966e15175ec4d9bc8c814b] <==
	2025/12/07 22:56:42 GCP Auth Webhook started!
	2025/12/07 22:56:49 Ready to marshal response ...
	2025/12/07 22:56:49 Ready to write response ...
	2025/12/07 22:56:52 Ready to marshal response ...
	2025/12/07 22:56:52 Ready to write response ...
	2025/12/07 22:56:52 Ready to marshal response ...
	2025/12/07 22:56:52 Ready to write response ...
	2025/12/07 22:57:08 Ready to marshal response ...
	2025/12/07 22:57:08 Ready to write response ...
	2025/12/07 22:57:09 Ready to marshal response ...
	2025/12/07 22:57:09 Ready to write response ...
	2025/12/07 22:57:09 Ready to marshal response ...
	2025/12/07 22:57:09 Ready to write response ...
	2025/12/07 22:57:12 Ready to marshal response ...
	2025/12/07 22:57:12 Ready to write response ...
	2025/12/07 22:57:18 Ready to marshal response ...
	2025/12/07 22:57:18 Ready to write response ...
	2025/12/07 22:57:19 Ready to marshal response ...
	2025/12/07 22:57:19 Ready to write response ...
	2025/12/07 22:57:50 Ready to marshal response ...
	2025/12/07 22:57:50 Ready to write response ...
	2025/12/07 22:59:33 Ready to marshal response ...
	2025/12/07 22:59:33 Ready to write response ...
	
	
	==> kernel <==
	 22:59:34 up  1:41,  0 user,  load average: 0.32, 1.30, 1.98
	Linux addons-746247 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e] <==
	I1207 22:57:30.381941       1 main.go:301] handling current node
	I1207 22:57:40.386305       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:57:40.386375       1 main.go:301] handling current node
	I1207 22:57:50.382198       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:57:50.382231       1 main.go:301] handling current node
	I1207 22:58:00.382165       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:58:00.382218       1 main.go:301] handling current node
	I1207 22:58:10.381913       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:58:10.381952       1 main.go:301] handling current node
	I1207 22:58:20.382381       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:58:20.382436       1 main.go:301] handling current node
	I1207 22:58:30.383000       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:58:30.383034       1 main.go:301] handling current node
	I1207 22:58:40.389450       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:58:40.389483       1 main.go:301] handling current node
	I1207 22:58:50.382371       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:58:50.382432       1 main.go:301] handling current node
	I1207 22:59:00.389896       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:59:00.389939       1 main.go:301] handling current node
	I1207 22:59:10.383547       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:59:10.383581       1 main.go:301] handling current node
	I1207 22:59:20.383780       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:59:20.383822       1 main.go:301] handling current node
	I1207 22:59:30.390518       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:59:30.390551       1 main.go:301] handling current node
	
	
	==> kube-apiserver [070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191] <==
	E1207 22:56:10.369239       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1207 22:56:10.369266       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 22:56:10.369354       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 22:56:10.369378       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1207 22:56:10.370388       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 22:56:14.403687       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 22:56:14.403758       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1207 22:56:14.403806       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.247.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.247.68:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1207 22:56:14.412707       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 22:56:18.309511       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1207 22:56:18.318010       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1207 22:56:18.329785       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1207 22:56:18.336492       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1207 22:57:02.351590       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59272: use of closed network connection
	E1207 22:57:02.509214       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59292: use of closed network connection
	I1207 22:57:08.323202       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1207 22:57:08.501982       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.165.155"}
	I1207 22:57:29.518232       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1207 22:59:33.358831       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.90.39"}
	
	
	==> kube-controller-manager [2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb] <==
	I1207 22:55:48.290753       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1207 22:55:48.290761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1207 22:55:48.290835       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1207 22:55:48.290845       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1207 22:55:48.290845       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1207 22:55:48.290861       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1207 22:55:48.290883       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1207 22:55:48.291619       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1207 22:55:48.291654       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1207 22:55:48.294940       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:55:48.299196       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1207 22:55:48.304522       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1207 22:55:48.304527       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 22:55:48.304607       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1207 22:55:48.304636       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1207 22:55:48.304640       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1207 22:55:48.304645       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1207 22:55:48.311023       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-746247" podCIDRs=["10.244.0.0/24"]
	I1207 22:55:48.311949       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1207 22:56:03.263964       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1207 22:56:18.300565       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1207 22:56:18.300634       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1207 22:56:18.313197       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1207 22:56:18.401423       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:56:18.414089       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855] <==
	I1207 22:55:49.965166       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:55:50.049680       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:55:50.150552       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:55:50.150589       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:55:50.150685       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:55:50.178560       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:55:50.178643       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:55:50.187540       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:55:50.188065       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:55:50.188106       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:55:50.189394       1 config.go:200] "Starting service config controller"
	I1207 22:55:50.189421       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:55:50.189442       1 config.go:309] "Starting node config controller"
	I1207 22:55:50.189447       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:55:50.189881       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:55:50.189917       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:55:50.189889       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:55:50.189976       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:55:50.289675       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:55:50.289812       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:55:50.292895       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 22:55:50.292911       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c] <==
	E1207 22:55:40.962641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:55:40.975891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:55:40.976269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:55:40.976352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:55:40.976418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:55:40.976485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 22:55:40.976553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 22:55:40.976600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:55:40.976646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 22:55:40.976773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:55:40.976813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:55:40.976853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 22:55:40.976985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:55:41.809698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 22:55:41.874465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:55:41.880508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:55:41.887543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:55:41.896836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 22:55:42.058879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 22:55:42.063881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:55:42.120025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1207 22:55:42.156121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:55:42.166386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:55:42.178507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1207 22:55:44.957718       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 22:57:51 addons-746247 kubelet[1278]: I1207 22:57:51.859497    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=0.720283994 podStartE2EDuration="1.859473287s" podCreationTimestamp="2025-12-07 22:57:50 +0000 UTC" firstStartedPulling="2025-12-07 22:57:50.566294836 +0000 UTC m=+127.418358960" lastFinishedPulling="2025-12-07 22:57:51.70548413 +0000 UTC m=+128.557548253" observedRunningTime="2025-12-07 22:57:51.857341084 +0000 UTC m=+128.709405211" watchObservedRunningTime="2025-12-07 22:57:51.859473287 +0000 UTC m=+128.711537431"
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.501575    1278 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ff8cbc55-d7cb-4b9a-9517-b29858bf712f-gcp-creds\") pod \"ff8cbc55-d7cb-4b9a-9517-b29858bf712f\" (UID: \"ff8cbc55-d7cb-4b9a-9517-b29858bf712f\") "
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.501686    1278 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc2vs\" (UniqueName: \"kubernetes.io/projected/ff8cbc55-d7cb-4b9a-9517-b29858bf712f-kube-api-access-cc2vs\") pod \"ff8cbc55-d7cb-4b9a-9517-b29858bf712f\" (UID: \"ff8cbc55-d7cb-4b9a-9517-b29858bf712f\") "
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.501691    1278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff8cbc55-d7cb-4b9a-9517-b29858bf712f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ff8cbc55-d7cb-4b9a-9517-b29858bf712f" (UID: "ff8cbc55-d7cb-4b9a-9517-b29858bf712f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.501846    1278 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^276a639e-d3c0-11f0-8bf5-12b410667e10\") pod \"ff8cbc55-d7cb-4b9a-9517-b29858bf712f\" (UID: \"ff8cbc55-d7cb-4b9a-9517-b29858bf712f\") "
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.502050    1278 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ff8cbc55-d7cb-4b9a-9517-b29858bf712f-gcp-creds\") on node \"addons-746247\" DevicePath \"\""
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.504443    1278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff8cbc55-d7cb-4b9a-9517-b29858bf712f-kube-api-access-cc2vs" (OuterVolumeSpecName: "kube-api-access-cc2vs") pod "ff8cbc55-d7cb-4b9a-9517-b29858bf712f" (UID: "ff8cbc55-d7cb-4b9a-9517-b29858bf712f"). InnerVolumeSpecName "kube-api-access-cc2vs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.505580    1278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^276a639e-d3c0-11f0-8bf5-12b410667e10" (OuterVolumeSpecName: "task-pv-storage") pod "ff8cbc55-d7cb-4b9a-9517-b29858bf712f" (UID: "ff8cbc55-d7cb-4b9a-9517-b29858bf712f"). InnerVolumeSpecName "pvc-8515622a-f084-4145-9084-2c07635e0002". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.603211    1278 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cc2vs\" (UniqueName: \"kubernetes.io/projected/ff8cbc55-d7cb-4b9a-9517-b29858bf712f-kube-api-access-cc2vs\") on node \"addons-746247\" DevicePath \"\""
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.603275    1278 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-8515622a-f084-4145-9084-2c07635e0002\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^276a639e-d3c0-11f0-8bf5-12b410667e10\") on node \"addons-746247\" "
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.607822    1278 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-8515622a-f084-4145-9084-2c07635e0002" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^276a639e-d3c0-11f0-8bf5-12b410667e10") on node "addons-746247"
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.704268    1278 reconciler_common.go:299] "Volume detached for volume \"pvc-8515622a-f084-4145-9084-2c07635e0002\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^276a639e-d3c0-11f0-8bf5-12b410667e10\") on node \"addons-746247\" DevicePath \"\""
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.877935    1278 scope.go:117] "RemoveContainer" containerID="9873d8faf60603c5a9923211454581ded16bfc06c36a1ebd352c9a49c342b26b"
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.887528    1278 scope.go:117] "RemoveContainer" containerID="9873d8faf60603c5a9923211454581ded16bfc06c36a1ebd352c9a49c342b26b"
	Dec 07 22:57:58 addons-746247 kubelet[1278]: E1207 22:57:58.887955    1278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9873d8faf60603c5a9923211454581ded16bfc06c36a1ebd352c9a49c342b26b\": container with ID starting with 9873d8faf60603c5a9923211454581ded16bfc06c36a1ebd352c9a49c342b26b not found: ID does not exist" containerID="9873d8faf60603c5a9923211454581ded16bfc06c36a1ebd352c9a49c342b26b"
	Dec 07 22:57:58 addons-746247 kubelet[1278]: I1207 22:57:58.888000    1278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9873d8faf60603c5a9923211454581ded16bfc06c36a1ebd352c9a49c342b26b"} err="failed to get container status \"9873d8faf60603c5a9923211454581ded16bfc06c36a1ebd352c9a49c342b26b\": rpc error: code = NotFound desc = could not find container \"9873d8faf60603c5a9923211454581ded16bfc06c36a1ebd352c9a49c342b26b\": container with ID starting with 9873d8faf60603c5a9923211454581ded16bfc06c36a1ebd352c9a49c342b26b not found: ID does not exist"
	Dec 07 22:57:59 addons-746247 kubelet[1278]: I1207 22:57:59.235602    1278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff8cbc55-d7cb-4b9a-9517-b29858bf712f" path="/var/lib/kubelet/pods/ff8cbc55-d7cb-4b9a-9517-b29858bf712f/volumes"
	Dec 07 22:58:03 addons-746247 kubelet[1278]: E1207 22:58:03.793496    1278 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-vl9gn" podUID="fd8e6cfd-a85b-4980-b193-cf4b6f8bc5b4"
	Dec 07 22:58:18 addons-746247 kubelet[1278]: I1207 22:58:18.969623    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-vl9gn" podStartSLOduration=148.438281062 podStartE2EDuration="2m29.969595341s" podCreationTimestamp="2025-12-07 22:55:49 +0000 UTC" firstStartedPulling="2025-12-07 22:58:17.254889631 +0000 UTC m=+154.106953758" lastFinishedPulling="2025-12-07 22:58:18.786203902 +0000 UTC m=+155.638268037" observedRunningTime="2025-12-07 22:58:18.968115819 +0000 UTC m=+155.820179986" watchObservedRunningTime="2025-12-07 22:58:18.969595341 +0000 UTC m=+155.821659485"
	Dec 07 22:58:29 addons-746247 kubelet[1278]: I1207 22:58:29.231947    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-tphvv" secret="" err="secret \"gcp-auth\" not found"
	Dec 07 22:58:44 addons-746247 kubelet[1278]: I1207 22:58:44.231821    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-kblb2" secret="" err="secret \"gcp-auth\" not found"
	Dec 07 22:58:54 addons-746247 kubelet[1278]: I1207 22:58:54.231790    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gpckr" secret="" err="secret \"gcp-auth\" not found"
	Dec 07 22:59:18 addons-746247 kubelet[1278]: I1207 22:59:18.232471    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-d7n5r" secret="" err="secret \"gcp-auth\" not found"
	Dec 07 22:59:33 addons-746247 kubelet[1278]: I1207 22:59:33.326132    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmggz\" (UniqueName: \"kubernetes.io/projected/548c8faa-c351-41bd-bb64-07106d611afa-kube-api-access-jmggz\") pod \"hello-world-app-5d498dc89-p8mwn\" (UID: \"548c8faa-c351-41bd-bb64-07106d611afa\") " pod="default/hello-world-app-5d498dc89-p8mwn"
	Dec 07 22:59:33 addons-746247 kubelet[1278]: I1207 22:59:33.326204    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/548c8faa-c351-41bd-bb64-07106d611afa-gcp-creds\") pod \"hello-world-app-5d498dc89-p8mwn\" (UID: \"548c8faa-c351-41bd-bb64-07106d611afa\") " pod="default/hello-world-app-5d498dc89-p8mwn"
	
	
	==> storage-provisioner [c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac] <==
	W1207 22:59:10.079732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:12.083473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:12.087200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:14.090209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:14.094443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:16.097705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:16.101994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:18.105360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:18.109700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:20.112971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:20.116812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:22.120127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:22.125202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:24.127839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:24.132034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:26.134974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:26.138652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:28.142125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:28.148055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:30.150794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:30.154431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:32.158131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:32.162827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:34.166866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:59:34.174577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-746247 -n addons-746247
helpers_test.go:269: (dbg) Run:  kubectl --context addons-746247 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-bkb7d ingress-nginx-admission-patch-klnc2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-746247 describe pod ingress-nginx-admission-create-bkb7d ingress-nginx-admission-patch-klnc2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-746247 describe pod ingress-nginx-admission-create-bkb7d ingress-nginx-admission-patch-klnc2: exit status 1 (59.780891ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bkb7d" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-klnc2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-746247 describe pod ingress-nginx-admission-create-bkb7d ingress-nginx-admission-patch-klnc2: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (254.817878ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:59:35.729203  409218 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:59:35.729503  409218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:59:35.729514  409218 out.go:374] Setting ErrFile to fd 2...
	I1207 22:59:35.729519  409218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:59:35.729705  409218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:59:35.730066  409218 mustload.go:66] Loading cluster: addons-746247
	I1207 22:59:35.730465  409218 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:59:35.730491  409218 addons.go:622] checking whether the cluster is paused
	I1207 22:59:35.730591  409218 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:59:35.730608  409218 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:59:35.731003  409218 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:59:35.750347  409218 ssh_runner.go:195] Run: systemctl --version
	I1207 22:59:35.750416  409218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:59:35.769220  409218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:59:35.867311  409218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:59:35.867415  409218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:59:35.899089  409218 cri.go:89] found id: "1bbc56671742a86191f61a7de2faeeacf791ee3faf44489c27fa25223162165a"
	I1207 22:59:35.899110  409218 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:59:35.899114  409218 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:59:35.899117  409218 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:59:35.899120  409218 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:59:35.899123  409218 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:59:35.899126  409218 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:59:35.899129  409218 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:59:35.899132  409218 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:59:35.899138  409218 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:59:35.899140  409218 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:59:35.899143  409218 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:59:35.899146  409218 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:59:35.899149  409218 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:59:35.899151  409218 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:59:35.899157  409218 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:59:35.899160  409218 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:59:35.899164  409218 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:59:35.899167  409218 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:59:35.899170  409218 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:59:35.899174  409218 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:59:35.899177  409218 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:59:35.899180  409218 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:59:35.899182  409218 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:59:35.899185  409218 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:59:35.899188  409218 cri.go:89] found id: ""
	I1207 22:59:35.899225  409218 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:59:35.913584  409218 out.go:203] 
	W1207 22:59:35.914760  409218 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:59:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:59:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:59:35.914785  409218 out.go:285] * 
	* 
	W1207 22:59:35.919498  409218 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:59:35.920819  409218 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable ingress --alsologtostderr -v=1: exit status 11 (244.101886ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:59:35.983988  409306 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:59:35.984091  409306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:59:35.984099  409306 out.go:374] Setting ErrFile to fd 2...
	I1207 22:59:35.984104  409306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:59:35.984314  409306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:59:35.984588  409306 mustload.go:66] Loading cluster: addons-746247
	I1207 22:59:35.984894  409306 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:59:35.984917  409306 addons.go:622] checking whether the cluster is paused
	I1207 22:59:35.984997  409306 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:59:35.985009  409306 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:59:35.985418  409306 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:59:36.003755  409306 ssh_runner.go:195] Run: systemctl --version
	I1207 22:59:36.003811  409306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:59:36.022384  409306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:59:36.115147  409306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:59:36.115236  409306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:59:36.144214  409306 cri.go:89] found id: "1bbc56671742a86191f61a7de2faeeacf791ee3faf44489c27fa25223162165a"
	I1207 22:59:36.144241  409306 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:59:36.144248  409306 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:59:36.144254  409306 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:59:36.144259  409306 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:59:36.144265  409306 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:59:36.144270  409306 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:59:36.144274  409306 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:59:36.144278  409306 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:59:36.144287  409306 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:59:36.144292  409306 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:59:36.144296  409306 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:59:36.144304  409306 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:59:36.144308  409306 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:59:36.144317  409306 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:59:36.144340  409306 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:59:36.144350  409306 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:59:36.144356  409306 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:59:36.144361  409306 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:59:36.144366  409306 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:59:36.144376  409306 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:59:36.144380  409306 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:59:36.144395  409306 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:59:36.144398  409306 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:59:36.144400  409306 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:59:36.144403  409306 cri.go:89] found id: ""
	I1207 22:59:36.144452  409306 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:59:36.158680  409306 out.go:203] 
	W1207 22:59:36.159942  409306 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:59:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:59:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:59:36.159965  409306 out.go:285] * 
	* 
	W1207 22:59:36.164158  409306 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:59:36.165445  409306 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.09s)
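
Every `addons disable ...` invocation in this test, and in the other addon tests below, fails the same way: before touching the addon, minikube checks whether the cluster is paused, and that check shells into the node and runs `sudo runc list -f json`. On this crio node /run/runc does not exist, so runc exits 1, the paused check is treated as fatal, and the command aborts with MK_ADDON_DISABLE_PAUSED (exit status 11) even though crictl lists the kube-system containers without trouble. A minimal reproduction of the two probes from the trace, run over minikube ssh (both commands are copied from the log; the remark about runc's state root is an interpretation, not something the log states):

	# succeeds: lists the kube-system container IDs shown as "found id:" above
	minikube -p addons-746247 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# fails on this node: runc keeps its state under /run/runc by default, and that directory is absent here
	minikube -p addons-746247 ssh -- sudo runc list -f json
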

                                                
                                    
TestAddons/parallel/InspektorGadget (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-8ktw6" [b0406a04-04be-4fa6-8b6b-7a3b21a7659e] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003914221s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (297.184916ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:57:10.382921  404585 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:10.383055  404585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:10.383066  404585 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:10.383073  404585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:10.383589  404585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:10.384012  404585 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:10.384514  404585 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:10.384551  404585 addons.go:622] checking whether the cluster is paused
	I1207 22:57:10.384655  404585 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:10.384687  404585 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:10.385213  404585 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:10.410519  404585 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:10.410592  404585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:10.433957  404585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:10.538069  404585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:10.538186  404585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:10.576568  404585 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:10.576592  404585 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:10.576599  404585 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:10.576603  404585 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:10.576607  404585 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:10.576612  404585 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:10.576616  404585 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:10.576620  404585 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:10.576624  404585 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:10.576645  404585 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:10.576654  404585 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:10.576659  404585 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:10.576665  404585 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:10.576671  404585 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:10.576679  404585 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:10.576691  404585 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:10.576699  404585 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:10.576704  404585 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:10.576708  404585 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:10.576712  404585 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:10.576720  404585 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:10.576731  404585 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:10.576736  404585 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:10.576739  404585 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:10.576744  404585 cri.go:89] found id: ""
	I1207 22:57:10.576801  404585 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:10.593755  404585 out.go:203] 
	W1207 22:57:10.594932  404585 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:10.594955  404585 out.go:285] * 
	* 
	W1207 22:57:10.600905  404585 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:10.602142  404585 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.30s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.363956ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-jnsx9" [2733de69-8b13-43ab-8b4e-a11f01ca6694] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002688995s
addons_test.go:463: (dbg) Run:  kubectl --context addons-746247 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (245.435669ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:57:07.890254  403958 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:07.890562  403958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:07.890573  403958 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:07.890578  403958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:07.890755  403958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:07.891047  403958 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:07.891389  403958 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:07.891412  403958 addons.go:622] checking whether the cluster is paused
	I1207 22:57:07.891506  403958 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:07.891524  403958 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:07.891966  403958 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:07.910712  403958 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:07.910782  403958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:07.929720  403958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:08.022308  403958 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:08.022436  403958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:08.052512  403958 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:08.052536  403958 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:08.052541  403958 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:08.052544  403958 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:08.052547  403958 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:08.052552  403958 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:08.052555  403958 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:08.052558  403958 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:08.052561  403958 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:08.052572  403958 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:08.052575  403958 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:08.052578  403958 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:08.052581  403958 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:08.052583  403958 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:08.052586  403958 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:08.052590  403958 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:08.052593  403958 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:08.052601  403958 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:08.052604  403958 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:08.052607  403958 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:08.052610  403958 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:08.052612  403958 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:08.052615  403958 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:08.052617  403958 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:08.052620  403958 cri.go:89] found id: ""
	I1207 22:57:08.052658  403958 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:08.067307  403958 out.go:203] 
	W1207 22:57:08.068465  403958 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:08.068493  403958 out.go:285] * 
	* 
	W1207 22:57:08.072485  403958 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:08.073714  403958 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)

                                                
                                    
TestAddons/parallel/CSI (42.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1207 22:57:17.155736  393125 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1207 22:57:17.159131  393125 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1207 22:57:17.159156  393125 kapi.go:107] duration metric: took 3.442689ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.455622ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-746247 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-746247 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [a1173d37-0ffc-44da-9818-ac6f9bf045c4] Pending
helpers_test.go:352: "task-pv-pod" [a1173d37-0ffc-44da-9818-ac6f9bf045c4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [a1173d37-0ffc-44da-9818-ac6f9bf045c4] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003643008s
addons_test.go:572: (dbg) Run:  kubectl --context addons-746247 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-746247 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-746247 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-746247 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-746247 delete pod task-pv-pod: (1.200873102s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-746247 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-746247 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-746247 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [ff8cbc55-d7cb-4b9a-9517-b29858bf712f] Pending
helpers_test.go:352: "task-pv-pod-restore" [ff8cbc55-d7cb-4b9a-9517-b29858bf712f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [ff8cbc55-d7cb-4b9a-9517-b29858bf712f] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003548242s
addons_test.go:614: (dbg) Run:  kubectl --context addons-746247 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-746247 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-746247 delete volumesnapshot new-snapshot-demo
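
The kubectl sequence above exercises the full snapshot/restore path of the csi-hostpath driver: provision the hpvc claim, run task-pv-pod against it, take the new-snapshot-demo VolumeSnapshot, delete the pod and the original claim, restore hpvc-restore from the snapshot and run task-pv-pod-restore on it. That storage flow passes; only the addon-disable calls that follow fail. The restore claim (testdata/csi-hostpath-driver/pvc-restore.yaml, not reproduced in the log) hinges on a dataSource that points back at the snapshot; a sketch of an equivalent claim, for illustration only, where the object names are taken from the log and the storage class and size are assumptions:

	kubectl --context addons-746247 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc    # assumed; whichever class the hostpath driver registers
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 1Gi                     # assumed size
	  dataSource:
	    name: new-snapshot-demo            # VolumeSnapshot created earlier in this test
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	EOF
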
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (246.323752ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:57:59.285744  407091 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:59.285878  407091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:59.285887  407091 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:59.285892  407091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:59.286122  407091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:59.286425  407091 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:59.286782  407091 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:59.286804  407091 addons.go:622] checking whether the cluster is paused
	I1207 22:57:59.286885  407091 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:59.286902  407091 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:59.287249  407091 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:59.305793  407091 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:59.305838  407091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:59.323749  407091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:59.417195  407091 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:59.417354  407091 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:59.446582  407091 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:59.446607  407091 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:59.446613  407091 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:59.446618  407091 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:59.446623  407091 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:59.446627  407091 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:59.446631  407091 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:59.446635  407091 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:59.446638  407091 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:59.446646  407091 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:59.446651  407091 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:59.446655  407091 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:59.446660  407091 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:59.446665  407091 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:59.446670  407091 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:59.446683  407091 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:59.446691  407091 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:59.446698  407091 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:59.446701  407091 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:59.446705  407091 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:59.446714  407091 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:59.446722  407091 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:59.446727  407091 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:59.446733  407091 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:59.446737  407091 cri.go:89] found id: ""
	I1207 22:57:59.446787  407091 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:59.461458  407091 out.go:203] 
	W1207 22:57:59.462476  407091 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:59.462498  407091 out.go:285] * 
	* 
	W1207 22:57:59.466442  407091 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:59.467740  407091 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (250.2271ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:57:59.531478  407153 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:59.531758  407153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:59.531768  407153 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:59.531773  407153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:59.531969  407153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:59.532229  407153 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:59.532577  407153 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:59.532600  407153 addons.go:622] checking whether the cluster is paused
	I1207 22:57:59.532685  407153 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:59.532703  407153 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:59.533098  407153 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:59.551059  407153 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:59.551112  407153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:59.570494  407153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:59.664856  407153 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:59.664943  407153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:59.696984  407153 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:59.697014  407153 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:59.697023  407153 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:59.697028  407153 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:59.697033  407153 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:59.697038  407153 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:59.697042  407153 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:59.697046  407153 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:59.697052  407153 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:59.697078  407153 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:59.697088  407153 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:59.697093  407153 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:59.697097  407153 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:59.697105  407153 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:59.697110  407153 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:59.697119  407153 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:59.697122  407153 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:59.697128  407153 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:59.697131  407153 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:59.697133  407153 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:59.697136  407153 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:59.697145  407153 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:59.697151  407153 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:59.697154  407153 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:59.697156  407153 cri.go:89] found id: ""
	I1207 22:57:59.697198  407153 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:59.712248  407153 out.go:203] 
	W1207 22:57:59.713286  407153 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:59.713302  407153 out.go:285] * 
	* 
	W1207 22:57:59.717354  407153 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:59.718744  407153 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (42.57s)
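Note: this failure and the Headlamp failure below share the same trigger. The addon enable/disable path first checks whether the cluster is paused (addons.go:622) by listing kube-system containers with crictl and then running "sudo runc list -f json" on the node, and that runc call exits 1 because /run/runc does not exist on this crio node; why it is absent is not established by the log. A minimal sketch of reproducing the same check by hand, assuming the addons-746247 profile from this run is still up (commands mirror the ones captured above; /run/crun is only a guess at the alternative runtime state directory):

	out/minikube-linux-amd64 -p addons-746247 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	out/minikube-linux-amd64 -p addons-746247 ssh "sudo runc list -f json"          # fails in this run: open /run/runc: no such file or directory
	out/minikube-linux-amd64 -p addons-746247 ssh "sudo ls -d /run/runc /run/crun"  # check which runtime state directory actually exists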

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.54s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-746247 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-746247 --alsologtostderr -v=1: exit status 11 (248.781257ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:57:02.827465  403141 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:02.827718  403141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:02.827727  403141 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:02.827731  403141 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:02.827930  403141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:02.828201  403141 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:02.828568  403141 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:02.828593  403141 addons.go:622] checking whether the cluster is paused
	I1207 22:57:02.828673  403141 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:02.828690  403141 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:02.829158  403141 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:02.847434  403141 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:02.847487  403141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:02.865741  403141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:02.958540  403141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:02.958791  403141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:02.990492  403141 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:02.990528  403141 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:02.990536  403141 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:02.990541  403141 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:02.990544  403141 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:02.990548  403141 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:02.990551  403141 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:02.990556  403141 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:02.990560  403141 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:02.990570  403141 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:02.990579  403141 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:02.990584  403141 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:02.990592  403141 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:02.990597  403141 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:02.990605  403141 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:02.990613  403141 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:02.990620  403141 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:02.990627  403141 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:02.990630  403141 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:02.990633  403141 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:02.990636  403141 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:02.990638  403141 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:02.990641  403141 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:02.990643  403141 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:02.990647  403141 cri.go:89] found id: ""
	I1207 22:57:02.990699  403141 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:03.004849  403141 out.go:203] 
	W1207 22:57:03.005960  403141 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:03.005980  403141 out.go:285] * 
	* 
	W1207 22:57:03.010011  403141 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:03.011188  403141 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-746247 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-746247
helpers_test.go:243: (dbg) docker inspect addons-746247:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8",
	        "Created": "2025-12-07T22:55:27.983832034Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 395583,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T22:55:28.025616139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8/hostname",
	        "HostsPath": "/var/lib/docker/containers/080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8/hosts",
	        "LogPath": "/var/lib/docker/containers/080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8/080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8-json.log",
	        "Name": "/addons-746247",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-746247:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-746247",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "080063613ae7b311e6fac990dd49efdbdefd2da2e0e17bc114805029bfe22ab8",
	                "LowerDir": "/var/lib/docker/overlay2/b595ab9cfb55a9daf85c866674f02743973b1601addf08afcae02b59b38cf495-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b595ab9cfb55a9daf85c866674f02743973b1601addf08afcae02b59b38cf495/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b595ab9cfb55a9daf85c866674f02743973b1601addf08afcae02b59b38cf495/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b595ab9cfb55a9daf85c866674f02743973b1601addf08afcae02b59b38cf495/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-746247",
	                "Source": "/var/lib/docker/volumes/addons-746247/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-746247",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-746247",
	                "name.minikube.sigs.k8s.io": "addons-746247",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3d86637c0d851aa624d56a2b281f214498dfaa59d9f2878994009fab6db2049d",
	            "SandboxKey": "/var/run/docker/netns/3d86637c0d85",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-746247": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be54931499a886ac0b17f6a5741a36d0b71dc5f6d5ce5015072847b13448e7f6",
	                    "EndpointID": "be47e88703de538019e033354a322e232626840e947ae3bfe89166e4eb90973f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "be:24:c4:a1:f7:d4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-746247",
	                        "080063613ae7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
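Note: the SSH endpoint the harness used above (127.0.0.1:33148 in the sshutil line) comes from the NetworkSettings.Ports section of this inspect output. A minimal sketch of extracting it with the same Go template the log itself runs (quoting simplified):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-746247
	# prints 33148 for the container state captured in this report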
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-746247 -n addons-746247
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-746247 logs -n 25: (1.130519168s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-210257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-210257   │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │ 07 Dec 25 22:54 UTC │
	│ delete  │ -p download-only-210257                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-210257   │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │ 07 Dec 25 22:54 UTC │
	│ start   │ -o=json --download-only -p download-only-780730 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-780730   │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │ 07 Dec 25 22:54 UTC │
	│ delete  │ -p download-only-780730                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-780730   │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │ 07 Dec 25 22:54 UTC │
	│ start   │ -o=json --download-only -p download-only-853065 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-853065   │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:55 UTC │
	│ delete  │ -p download-only-853065                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-853065   │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:55 UTC │
	│ delete  │ -p download-only-210257                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-210257   │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:55 UTC │
	│ delete  │ -p download-only-780730                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-780730   │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:55 UTC │
	│ delete  │ -p download-only-853065                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-853065   │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:55 UTC │
	│ start   │ --download-only -p download-docker-798136 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-798136 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	│ delete  │ -p download-docker-798136                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-798136 │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:55 UTC │
	│ start   │ --download-only -p binary-mirror-074233 --alsologtostderr --binary-mirror http://127.0.0.1:45187 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-074233   │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	│ delete  │ -p binary-mirror-074233                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-074233   │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:55 UTC │
	│ addons  │ enable dashboard -p addons-746247                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-746247          │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	│ addons  │ disable dashboard -p addons-746247                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-746247          │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │                     │
	│ start   │ -p addons-746247 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-746247          │ jenkins │ v1.37.0 │ 07 Dec 25 22:55 UTC │ 07 Dec 25 22:56 UTC │
	│ addons  │ addons-746247 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-746247          │ jenkins │ v1.37.0 │ 07 Dec 25 22:56 UTC │                     │
	│ addons  │ addons-746247 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-746247          │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	│ addons  │ enable headlamp -p addons-746247 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-746247          │ jenkins │ v1.37.0 │ 07 Dec 25 22:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:55:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:55:07.877967  394947 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:55:07.878091  394947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:55:07.878100  394947 out.go:374] Setting ErrFile to fd 2...
	I1207 22:55:07.878105  394947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:55:07.878316  394947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:55:07.878883  394947 out.go:368] Setting JSON to false
	I1207 22:55:07.879777  394947 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5852,"bootTime":1765142256,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:55:07.879834  394947 start.go:143] virtualization: kvm guest
	I1207 22:55:07.881958  394947 out.go:179] * [addons-746247] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:55:07.883277  394947 notify.go:221] Checking for updates...
	I1207 22:55:07.883287  394947 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 22:55:07.884545  394947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:55:07.885833  394947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 22:55:07.887059  394947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 22:55:07.888222  394947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 22:55:07.889362  394947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 22:55:07.890685  394947 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:55:07.917005  394947 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:55:07.917109  394947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:55:07.972282  394947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-07 22:55:07.962463475 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:55:07.972416  394947 docker.go:319] overlay module found
	I1207 22:55:07.974972  394947 out.go:179] * Using the docker driver based on user configuration
	I1207 22:55:07.976048  394947 start.go:309] selected driver: docker
	I1207 22:55:07.976061  394947 start.go:927] validating driver "docker" against <nil>
	I1207 22:55:07.976072  394947 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 22:55:07.976664  394947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:55:08.036514  394947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:45 SystemTime:2025-12-07 22:55:08.026605684 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:55:08.036669  394947 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:55:08.036865  394947 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 22:55:08.038621  394947 out.go:179] * Using Docker driver with root privileges
	I1207 22:55:08.039725  394947 cni.go:84] Creating CNI manager for ""
	I1207 22:55:08.039808  394947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 22:55:08.039824  394947 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 22:55:08.039909  394947 start.go:353] cluster config:
	{Name:addons-746247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-746247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:55:08.041150  394947 out.go:179] * Starting "addons-746247" primary control-plane node in "addons-746247" cluster
	I1207 22:55:08.042067  394947 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 22:55:08.043164  394947 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 22:55:08.044140  394947 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 22:55:08.044167  394947 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 22:55:08.044186  394947 cache.go:65] Caching tarball of preloaded images
	I1207 22:55:08.044258  394947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 22:55:08.044333  394947 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 22:55:08.044348  394947 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 22:55:08.044742  394947 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/config.json ...
	I1207 22:55:08.044771  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/config.json: {Name:mk1ec2873a49cec8dde6b1769bdcaef76c909bf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:08.061282  394947 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1207 22:55:08.061444  394947 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1207 22:55:08.061484  394947 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1207 22:55:08.061494  394947 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1207 22:55:08.061506  394947 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1207 22:55:08.061517  394947 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1207 22:55:21.204767  394947 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1207 22:55:21.204812  394947 cache.go:243] Successfully downloaded all kic artifacts
	I1207 22:55:21.204861  394947 start.go:360] acquireMachinesLock for addons-746247: {Name:mkdac485f32371369587267e2a039908da41c790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 22:55:21.204982  394947 start.go:364] duration metric: took 98.729µs to acquireMachinesLock for "addons-746247"
	I1207 22:55:21.205007  394947 start.go:93] Provisioning new machine with config: &{Name:addons-746247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-746247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 22:55:21.205089  394947 start.go:125] createHost starting for "" (driver="docker")
	I1207 22:55:21.207243  394947 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1207 22:55:21.207530  394947 start.go:159] libmachine.API.Create for "addons-746247" (driver="docker")
	I1207 22:55:21.207582  394947 client.go:173] LocalClient.Create starting
	I1207 22:55:21.207702  394947 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem
	I1207 22:55:21.264446  394947 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem
	I1207 22:55:21.455469  394947 cli_runner.go:164] Run: docker network inspect addons-746247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 22:55:21.473006  394947 cli_runner.go:211] docker network inspect addons-746247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 22:55:21.473073  394947 network_create.go:284] running [docker network inspect addons-746247] to gather additional debugging logs...
	I1207 22:55:21.473095  394947 cli_runner.go:164] Run: docker network inspect addons-746247
	W1207 22:55:21.489353  394947 cli_runner.go:211] docker network inspect addons-746247 returned with exit code 1
	I1207 22:55:21.489405  394947 network_create.go:287] error running [docker network inspect addons-746247]: docker network inspect addons-746247: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-746247 not found
	I1207 22:55:21.489424  394947 network_create.go:289] output of [docker network inspect addons-746247]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-746247 not found
	
	** /stderr **
	I1207 22:55:21.489597  394947 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 22:55:21.506721  394947 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c10a70}
	I1207 22:55:21.506775  394947 network_create.go:124] attempt to create docker network addons-746247 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1207 22:55:21.506842  394947 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-746247 addons-746247
	I1207 22:55:21.554823  394947 network_create.go:108] docker network addons-746247 192.168.49.0/24 created
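
The step above settles on the first free private subnet (192.168.49.0/24 here) before creating the bridge network. Below is a minimal, self-contained Go sketch of that kind of overlap check using only the standard library; the in-use subnet list and the helper name are invented for illustration and this is not minikube's network_create code.

// subnetcheck.go - illustrative only: test whether a candidate private subnet
// overlaps any subnet already in use (e.g. gathered from `docker network inspect`).
package main

import (
    "fmt"
    "net"
)

// overlaps reports whether two CIDR ranges share any addresses.
func overlaps(a, b *net.IPNet) bool {
    return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
    inUse := []string{"172.17.0.0/16", "172.18.0.0/16"} // assumed existing docker networks
    candidate := "192.168.49.0/24"                      // the subnet the log settles on

    _, cand, err := net.ParseCIDR(candidate)
    if err != nil {
        panic(err)
    }
    free := true
    for _, s := range inUse {
        _, n, err := net.ParseCIDR(s)
        if err != nil {
            continue
        }
        if overlaps(cand, n) {
            free = false
            break
        }
    }
    fmt.Printf("subnet %s free: %v\n", candidate, free)
}
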
	I1207 22:55:21.554857  394947 kic.go:121] calculated static IP "192.168.49.2" for the "addons-746247" container
	I1207 22:55:21.554924  394947 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 22:55:21.571502  394947 cli_runner.go:164] Run: docker volume create addons-746247 --label name.minikube.sigs.k8s.io=addons-746247 --label created_by.minikube.sigs.k8s.io=true
	I1207 22:55:21.590910  394947 oci.go:103] Successfully created a docker volume addons-746247
	I1207 22:55:21.590989  394947 cli_runner.go:164] Run: docker run --rm --name addons-746247-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-746247 --entrypoint /usr/bin/test -v addons-746247:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 22:55:24.094955  394947 cli_runner.go:217] Completed: docker run --rm --name addons-746247-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-746247 --entrypoint /usr/bin/test -v addons-746247:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (2.503924226s)
	I1207 22:55:24.094987  394947 oci.go:107] Successfully prepared a docker volume addons-746247
	I1207 22:55:24.095054  394947 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 22:55:24.095068  394947 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 22:55:24.095124  394947 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-746247:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1207 22:55:27.912478  394947 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-746247:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.817292018s)
	I1207 22:55:27.912512  394947 kic.go:203] duration metric: took 3.817440652s to extract preloaded images to volume ...
	W1207 22:55:27.912595  394947 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 22:55:27.912624  394947 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 22:55:27.912666  394947 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 22:55:27.966788  394947 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-746247 --name addons-746247 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-746247 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-746247 --network addons-746247 --ip 192.168.49.2 --volume addons-746247:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 22:55:28.243798  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Running}}
	I1207 22:55:28.263872  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:28.281972  394947 cli_runner.go:164] Run: docker exec addons-746247 stat /var/lib/dpkg/alternatives/iptables
	I1207 22:55:28.326214  394947 oci.go:144] the created container "addons-746247" has a running status.
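
After starting the container, the log polls it with `docker container inspect --format={{.State.Status}}` until it reports running. A hedged sketch of such a polling loop with os/exec follows; the container name and format string are taken from the log, but the helper itself is hypothetical, not the kic driver's code.

// containerstatus.go - illustrative poll loop around `docker container inspect`.
package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

func containerStatus(name string) (string, error) {
    out, err := exec.Command("docker", "container", "inspect", name,
        "--format", "{{.State.Status}}").Output()
    return strings.TrimSpace(string(out)), err
}

func main() {
    for i := 0; i < 10; i++ {
        st, err := containerStatus("addons-746247")
        if err == nil && st == "running" {
            fmt.Println("container is running")
            return
        }
        time.Sleep(time.Second)
    }
    fmt.Println("timed out waiting for container")
}
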
	I1207 22:55:28.326244  394947 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa...
	I1207 22:55:28.338113  394947 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 22:55:28.362801  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:28.384954  394947 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 22:55:28.384977  394947 kic_runner.go:114] Args: [docker exec --privileged addons-746247 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 22:55:28.425294  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:28.447721  394947 machine.go:94] provisionDockerMachine start ...
	I1207 22:55:28.447834  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:28.470101  394947 main.go:143] libmachine: Using SSH client type: native
	I1207 22:55:28.470470  394947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1207 22:55:28.470491  394947 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 22:55:28.471237  394947 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33934->127.0.0.1:33148: read: connection reset by peer
	I1207 22:55:31.602628  394947 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-746247
	
	I1207 22:55:31.602662  394947 ubuntu.go:182] provisioning hostname "addons-746247"
	I1207 22:55:31.602747  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:31.621851  394947 main.go:143] libmachine: Using SSH client type: native
	I1207 22:55:31.622077  394947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1207 22:55:31.622092  394947 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-746247 && echo "addons-746247" | sudo tee /etc/hostname
	I1207 22:55:31.760414  394947 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-746247
	
	I1207 22:55:31.760536  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:31.779073  394947 main.go:143] libmachine: Using SSH client type: native
	I1207 22:55:31.779315  394947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1207 22:55:31.779352  394947 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-746247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-746247/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-746247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 22:55:31.909356  394947 main.go:143] libmachine: SSH cmd err, output: <nil>: 
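
The hostname and /etc/hosts commands above are executed over SSH against the forwarded port 127.0.0.1:33148 using the generated id_rsa key. Roughly how a one-shot "run a command over SSH" helper can be written with golang.org/x/crypto/ssh is sketched below; this is illustrative rather than minikube's ssh_runner, and it deliberately skips host-key pinning for brevity.

// sshrun.go - minimal sketch of running one remote command over SSH with a key file.
package main

import (
    "fmt"
    "os"

    "golang.org/x/crypto/ssh"
)

func runSSH(addr, user, keyPath, cmd string) (string, error) {
    key, err := os.ReadFile(keyPath)
    if err != nil {
        return "", err
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        return "", err
    }
    cfg := &ssh.ClientConfig{
        User:            user,
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no pinned host key
    }
    client, err := ssh.Dial("tcp", addr, cfg)
    if err != nil {
        return "", err
    }
    defer client.Close()
    sess, err := client.NewSession()
    if err != nil {
        return "", err
    }
    defer sess.Close()
    out, err := sess.CombinedOutput(cmd)
    return string(out), err
}

func main() {
    out, err := runSSH("127.0.0.1:33148", "docker",
        "/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa",
        "hostname")
    fmt.Println(out, err)
}
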
	I1207 22:55:31.909390  394947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 22:55:31.909443  394947 ubuntu.go:190] setting up certificates
	I1207 22:55:31.909467  394947 provision.go:84] configureAuth start
	I1207 22:55:31.909549  394947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-746247
	I1207 22:55:31.927898  394947 provision.go:143] copyHostCerts
	I1207 22:55:31.927982  394947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 22:55:31.928114  394947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 22:55:31.928187  394947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 22:55:31.928254  394947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.addons-746247 san=[127.0.0.1 192.168.49.2 addons-746247 localhost minikube]
	I1207 22:55:32.029545  394947 provision.go:177] copyRemoteCerts
	I1207 22:55:32.029611  394947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 22:55:32.029648  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:32.048378  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:32.143012  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 22:55:32.163547  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 22:55:32.182016  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 22:55:32.199811  394947 provision.go:87] duration metric: took 290.321463ms to configureAuth
	I1207 22:55:32.199845  394947 ubuntu.go:206] setting minikube options for container-runtime
	I1207 22:55:32.200051  394947 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:55:32.200165  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:32.218883  394947 main.go:143] libmachine: Using SSH client type: native
	I1207 22:55:32.219141  394947 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1207 22:55:32.219158  394947 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 22:55:32.494750  394947 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 22:55:32.494780  394947 machine.go:97] duration metric: took 4.04701985s to provisionDockerMachine
	I1207 22:55:32.494794  394947 client.go:176] duration metric: took 11.287202498s to LocalClient.Create
	I1207 22:55:32.494808  394947 start.go:167] duration metric: took 11.287280187s to libmachine.API.Create "addons-746247"
	I1207 22:55:32.494817  394947 start.go:293] postStartSetup for "addons-746247" (driver="docker")
	I1207 22:55:32.494829  394947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 22:55:32.494891  394947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 22:55:32.494941  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:32.512633  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:32.608355  394947 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 22:55:32.612206  394947 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 22:55:32.612233  394947 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 22:55:32.612246  394947 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 22:55:32.612311  394947 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 22:55:32.612365  394947 start.go:296] duration metric: took 117.540414ms for postStartSetup
	I1207 22:55:32.612685  394947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-746247
	I1207 22:55:32.630216  394947 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/config.json ...
	I1207 22:55:32.630539  394947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 22:55:32.630583  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:32.648357  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:32.740828  394947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 22:55:32.745718  394947 start.go:128] duration metric: took 11.540610455s to createHost
	I1207 22:55:32.745758  394947 start.go:83] releasing machines lock for "addons-746247", held for 11.540764054s
	I1207 22:55:32.745835  394947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-746247
	I1207 22:55:32.763862  394947 ssh_runner.go:195] Run: cat /version.json
	I1207 22:55:32.763910  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:32.763976  394947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 22:55:32.764065  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:32.782121  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:32.783160  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:32.927228  394947 ssh_runner.go:195] Run: systemctl --version
	I1207 22:55:32.934304  394947 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 22:55:32.970418  394947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 22:55:32.975421  394947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 22:55:32.975501  394947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 22:55:33.002259  394947 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 22:55:33.002284  394947 start.go:496] detecting cgroup driver to use...
	I1207 22:55:33.002315  394947 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 22:55:33.002398  394947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 22:55:33.019553  394947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 22:55:33.032649  394947 docker.go:218] disabling cri-docker service (if available) ...
	I1207 22:55:33.032723  394947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 22:55:33.049663  394947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 22:55:33.067499  394947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 22:55:33.151706  394947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 22:55:33.238552  394947 docker.go:234] disabling docker service ...
	I1207 22:55:33.238620  394947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 22:55:33.258358  394947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 22:55:33.271151  394947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 22:55:33.356271  394947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 22:55:33.439672  394947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 22:55:33.452840  394947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 22:55:33.467089  394947 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 22:55:33.467152  394947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.477450  394947 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 22:55:33.477522  394947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.486169  394947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.495142  394947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.504242  394947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 22:55:33.512505  394947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.521180  394947 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.534828  394947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 22:55:33.543772  394947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 22:55:33.550871  394947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 22:55:33.558319  394947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:55:33.639086  394947 ssh_runner.go:195] Run: sudo systemctl restart crio
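
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses the registry.k8s.io/pause:3.10.1 pause image and the systemd cgroup driver before the restart. The same style of in-place line replacement, expressed as a small Go program over an in-memory copy of the drop-in (the starting contents are assumed, not read from the node):

// criodropin.go - illustrative rewrite of two cri-o drop-in settings, mirroring
// the sed substitutions in the log; not how minikube itself applies them.
package main

import (
    "fmt"
    "regexp"
)

func main() {
    conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
`
    conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
        ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
        ReplaceAllString(conf, `cgroup_manager = "systemd"`)
    fmt.Print(conf)
}
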
	I1207 22:55:33.776444  394947 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 22:55:33.776531  394947 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 22:55:33.780595  394947 start.go:564] Will wait 60s for crictl version
	I1207 22:55:33.780645  394947 ssh_runner.go:195] Run: which crictl
	I1207 22:55:33.784139  394947 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 22:55:33.810930  394947 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 22:55:33.811035  394947 ssh_runner.go:195] Run: crio --version
	I1207 22:55:33.839409  394947 ssh_runner.go:195] Run: crio --version
	I1207 22:55:33.869699  394947 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 22:55:33.870860  394947 cli_runner.go:164] Run: docker network inspect addons-746247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 22:55:33.888236  394947 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 22:55:33.892570  394947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 22:55:33.903004  394947 kubeadm.go:884] updating cluster {Name:addons-746247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-746247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 22:55:33.903142  394947 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 22:55:33.903192  394947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 22:55:33.935593  394947 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 22:55:33.935615  394947 crio.go:433] Images already preloaded, skipping extraction
	I1207 22:55:33.935661  394947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 22:55:33.961754  394947 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 22:55:33.961777  394947 cache_images.go:86] Images are preloaded, skipping loading
	I1207 22:55:33.961785  394947 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1207 22:55:33.961878  394947 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-746247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-746247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 22:55:33.961940  394947 ssh_runner.go:195] Run: crio config
	I1207 22:55:34.006804  394947 cni.go:84] Creating CNI manager for ""
	I1207 22:55:34.006829  394947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 22:55:34.006847  394947 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 22:55:34.006869  394947 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-746247 NodeName:addons-746247 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 22:55:34.006985  394947 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-746247"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 22:55:34.007053  394947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 22:55:34.015762  394947 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 22:55:34.015826  394947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 22:55:34.024072  394947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1207 22:55:34.037148  394947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 22:55:34.053353  394947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
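
The kubeadm.yaml shown above is rendered from the cluster parameters (node IP, node name, CRI socket, API server port) and copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of rendering just the InitConfiguration header with text/template follows; the template string and parameter struct are illustrative and much smaller than minikube's actual config generation.

// kubeadmtemplate.go - illustrative rendering of the InitConfiguration header.
package main

import (
    "os"
    "text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
    params := struct {
        NodeIP, NodeName, CRISocket string
        APIServerPort               int
    }{
        NodeIP:        "192.168.49.2",
        NodeName:      "addons-746247",
        CRISocket:     "/var/run/crio/crio.sock",
        APIServerPort: 8443,
    }
    t := template.Must(template.New("init").Parse(initCfg))
    if err := t.Execute(os.Stdout, params); err != nil {
        panic(err)
    }
}
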
	I1207 22:55:34.066929  394947 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1207 22:55:34.070768  394947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 22:55:34.080977  394947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:55:34.161866  394947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 22:55:34.188017  394947 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247 for IP: 192.168.49.2
	I1207 22:55:34.188043  394947 certs.go:195] generating shared ca certs ...
	I1207 22:55:34.188063  394947 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.188229  394947 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 22:55:34.249472  394947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt ...
	I1207 22:55:34.249503  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt: {Name:mkd69947a3567aa7d942ff19b503205a04e259b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.249687  394947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key ...
	I1207 22:55:34.249700  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key: {Name:mk2e9ee7c00196d91bb45d703a62468cec7da9a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.249785  394947 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 22:55:34.311480  394947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt ...
	I1207 22:55:34.311514  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt: {Name:mke7df825abb9dd8867e3bf7c96a7f60cd0e4178 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.311690  394947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key ...
	I1207 22:55:34.311703  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key: {Name:mk1346b56082063bd94f4694763c569b1bb6e322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.311775  394947 certs.go:257] generating profile certs ...
	I1207 22:55:34.311830  394947 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.key
	I1207 22:55:34.311844  394947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt with IP's: []
	I1207 22:55:34.459886  394947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt ...
	I1207 22:55:34.459919  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: {Name:mkda54fd8d145dcd877ec8773e9ab29431d85549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.460096  394947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.key ...
	I1207 22:55:34.460107  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.key: {Name:mk5d90aa2133412a9a7228d919ee55c2bf5e8d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.460174  394947 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.key.7aa9af9f
	I1207 22:55:34.460194  394947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.crt.7aa9af9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1207 22:55:34.512853  394947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.crt.7aa9af9f ...
	I1207 22:55:34.512882  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.crt.7aa9af9f: {Name:mk7435cc211dd19633fb876b7aac8cc207f2fb1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.513042  394947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.key.7aa9af9f ...
	I1207 22:55:34.513055  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.key.7aa9af9f: {Name:mk7642198a25c6ebb0765ede998b554bfc92b3d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.513127  394947 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.crt.7aa9af9f -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.crt
	I1207 22:55:34.513197  394947 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.key.7aa9af9f -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.key
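
The apiserver certificate generated above is signed by minikubeCA and carries the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2. A self-contained sketch of issuing that kind of CA-signed server certificate with crypto/x509 is shown below; it uses throwaway keys and a hypothetical must helper, and is not minikube's crypto.go.

// apiservercert.go - illustrative CA plus server cert with IP SANs.
package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

// must keeps the sketch short; real code would handle each error.
func must[T any](v T, err error) T {
    if err != nil {
        panic(err)
    }
    return v
}

func main() {
    // Throwaway CA; in the log the CA key and cert already exist on disk.
    caKey := must(rsa.GenerateKey(rand.Reader, 2048))
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
    caCert := must(x509.ParseCertificate(caDER))

    // Server cert for the apiserver with the IP SANs listed in the log.
    srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
            net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
        },
    }
    srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
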
	I1207 22:55:34.513246  394947 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.key
	I1207 22:55:34.513264  394947 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.crt with IP's: []
	I1207 22:55:34.539020  394947 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.crt ...
	I1207 22:55:34.539051  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.crt: {Name:mk880bdd2296a66ea10ef4a4a54c6b9c4d0d737d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.539197  394947 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.key ...
	I1207 22:55:34.539211  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.key: {Name:mk380aecc8ca7a8b0bbbb2d69c01405c028eeba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:34.539395  394947 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 22:55:34.539440  394947 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 22:55:34.539465  394947 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 22:55:34.539495  394947 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 22:55:34.540084  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 22:55:34.559178  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 22:55:34.577378  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 22:55:34.595015  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 22:55:34.612564  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1207 22:55:34.629867  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 22:55:34.647472  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 22:55:34.665693  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 22:55:34.683157  394947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 22:55:34.702841  394947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 22:55:34.715410  394947 ssh_runner.go:195] Run: openssl version
	I1207 22:55:34.721638  394947 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:55:34.729187  394947 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 22:55:34.739401  394947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:55:34.743371  394947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:55:34.743427  394947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 22:55:34.777970  394947 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 22:55:34.786068  394947 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 22:55:34.793700  394947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 22:55:34.797444  394947 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 22:55:34.797494  394947 kubeadm.go:401] StartCluster: {Name:addons-746247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-746247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:55:34.797569  394947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:55:34.797614  394947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:55:34.824698  394947 cri.go:89] found id: ""
	I1207 22:55:34.824774  394947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 22:55:34.832990  394947 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 22:55:34.841056  394947 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 22:55:34.841119  394947 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 22:55:34.849053  394947 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 22:55:34.849072  394947 kubeadm.go:158] found existing configuration files:
	
	I1207 22:55:34.849111  394947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 22:55:34.857140  394947 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 22:55:34.857210  394947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 22:55:34.864640  394947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 22:55:34.872159  394947 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 22:55:34.872209  394947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 22:55:34.879374  394947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 22:55:34.886894  394947 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 22:55:34.886944  394947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 22:55:34.894631  394947 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 22:55:34.902360  394947 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 22:55:34.902426  394947 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 22:55:34.909754  394947 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 22:55:34.947620  394947 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 22:55:34.947701  394947 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 22:55:34.981278  394947 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 22:55:34.981402  394947 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 22:55:34.981475  394947 kubeadm.go:319] OS: Linux
	I1207 22:55:34.981551  394947 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 22:55:34.981631  394947 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 22:55:34.981706  394947 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 22:55:34.981797  394947 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 22:55:34.981882  394947 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 22:55:34.981946  394947 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 22:55:34.982024  394947 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 22:55:34.982087  394947 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 22:55:35.041416  394947 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 22:55:35.041571  394947 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 22:55:35.041727  394947 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 22:55:35.049517  394947 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 22:55:35.052510  394947 out.go:252]   - Generating certificates and keys ...
	I1207 22:55:35.052629  394947 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 22:55:35.052727  394947 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 22:55:35.336449  394947 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 22:55:35.371707  394947 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 22:55:35.842888  394947 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 22:55:35.919018  394947 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 22:55:36.037801  394947 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 22:55:36.037963  394947 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-746247 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 22:55:36.144113  394947 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 22:55:36.144260  394947 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-746247 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 22:55:36.322581  394947 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 22:55:36.793949  394947 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 22:55:37.264164  394947 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 22:55:37.264302  394947 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 22:55:37.454215  394947 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 22:55:37.575913  394947 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 22:55:37.708250  394947 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 22:55:37.884302  394947 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 22:55:37.943668  394947 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 22:55:37.944192  394947 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 22:55:37.948013  394947 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 22:55:37.949649  394947 out.go:252]   - Booting up control plane ...
	I1207 22:55:37.949759  394947 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 22:55:37.949848  394947 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 22:55:37.950599  394947 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 22:55:37.978499  394947 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 22:55:37.978640  394947 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 22:55:37.985244  394947 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 22:55:37.986238  394947 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 22:55:37.986321  394947 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 22:55:38.084863  394947 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 22:55:38.084993  394947 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 22:55:39.086596  394947 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00186562s
	I1207 22:55:39.090916  394947 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 22:55:39.091045  394947 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1207 22:55:39.091217  394947 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 22:55:39.091384  394947 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 22:55:40.967963  394947 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.877067951s
	I1207 22:55:41.279216  394947 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.188278544s
	I1207 22:55:42.592959  394947 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502005s
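
The wait phase above is, in effect, polling the kubelet healthz endpoint and the control-plane livez/healthz endpoints until they answer. A rough Go equivalent of that probe loop is sketched here; the URLs are copied from the log, and TLS verification is skipped only because this standalone sketch has no access to the cluster CA.

// healthzwait.go - illustrative health polling of local component endpoints.
package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

func waitHealthy(url string, timeout time.Duration) bool {
    client := &http.Client{
        Timeout:   2 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return true
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return false
}

func main() {
    fmt.Println("kubelet healthy:", waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute))
    fmt.Println("kube-apiserver healthy:", waitHealthy("https://192.168.49.2:8443/livez", 4*time.Minute))
}
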
	I1207 22:55:42.608410  394947 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 22:55:42.620715  394947 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 22:55:42.630255  394947 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 22:55:42.630534  394947 kubeadm.go:319] [mark-control-plane] Marking the node addons-746247 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 22:55:42.637975  394947 kubeadm.go:319] [bootstrap-token] Using token: y88hmj.0itrb6u5xpqhln4u
	I1207 22:55:42.639462  394947 out.go:252]   - Configuring RBAC rules ...
	I1207 22:55:42.639606  394947 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 22:55:42.642464  394947 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 22:55:42.647610  394947 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 22:55:42.650838  394947 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 22:55:42.653312  394947 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 22:55:42.655666  394947 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 22:55:42.999761  394947 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 22:55:43.416415  394947 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 22:55:43.998632  394947 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 22:55:43.999491  394947 kubeadm.go:319] 
	I1207 22:55:43.999621  394947 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 22:55:43.999638  394947 kubeadm.go:319] 
	I1207 22:55:43.999728  394947 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 22:55:43.999738  394947 kubeadm.go:319] 
	I1207 22:55:43.999760  394947 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 22:55:43.999842  394947 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 22:55:43.999925  394947 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 22:55:43.999935  394947 kubeadm.go:319] 
	I1207 22:55:44.000031  394947 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 22:55:44.000043  394947 kubeadm.go:319] 
	I1207 22:55:44.000113  394947 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 22:55:44.000122  394947 kubeadm.go:319] 
	I1207 22:55:44.000198  394947 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 22:55:44.000264  394947 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 22:55:44.000375  394947 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 22:55:44.000392  394947 kubeadm.go:319] 
	I1207 22:55:44.000519  394947 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 22:55:44.000637  394947 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 22:55:44.000649  394947 kubeadm.go:319] 
	I1207 22:55:44.000782  394947 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token y88hmj.0itrb6u5xpqhln4u \
	I1207 22:55:44.000931  394947 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 \
	I1207 22:55:44.000952  394947 kubeadm.go:319] 	--control-plane 
	I1207 22:55:44.000956  394947 kubeadm.go:319] 
	I1207 22:55:44.001084  394947 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 22:55:44.001093  394947 kubeadm.go:319] 
	I1207 22:55:44.001160  394947 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token y88hmj.0itrb6u5xpqhln4u \
	I1207 22:55:44.001284  394947 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 
	I1207 22:55:44.002766  394947 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 22:55:44.002927  394947 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
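	Both preflight warnings above are non-fatal for this run: the GCP kernel ships without the "configs" module, and kubeadm merely suggests enabling the kubelet unit. A minimal sketch of how one might double-check each warning on the node, assuming the standard Ubuntu/GCP locations (these commands are not part of the minikube log):
	
		# sketch only, not from the test output: verify the two kubeadm warnings by hand
		systemctl is-enabled kubelet        # stays "disabled" until 'systemctl enable kubelet.service' is run
		modprobe configs 2>&1 || true       # fails on kernels without the configs module, matching the warning
		ls /boot/config-$(uname -r)         # kubeadm can still read the kernel config from this file
	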
	I1207 22:55:44.002960  394947 cni.go:84] Creating CNI manager for ""
	I1207 22:55:44.002974  394947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 22:55:44.004664  394947 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1207 22:55:44.005686  394947 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 22:55:44.010044  394947 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1207 22:55:44.010071  394947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1207 22:55:44.023840  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 22:55:44.235804  394947 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 22:55:44.235903  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:44.235938  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-746247 minikube.k8s.io/updated_at=2025_12_07T22_55_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=addons-746247 minikube.k8s.io/primary=true
	I1207 22:55:44.247539  394947 ops.go:34] apiserver oom_adj: -16
	I1207 22:55:44.311719  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:44.812580  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:45.312423  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:45.811825  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:46.312815  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:46.811918  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:47.312623  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:47.812761  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:48.311811  394947 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 22:55:48.378590  394947 kubeadm.go:1114] duration metric: took 4.142753066s to wait for elevateKubeSystemPrivileges
	I1207 22:55:48.378625  394947 kubeadm.go:403] duration metric: took 13.581135159s to StartCluster
	I1207 22:55:48.378643  394947 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:48.378770  394947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 22:55:48.379198  394947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 22:55:48.379473  394947 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 22:55:48.379496  394947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 22:55:48.379537  394947 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1207 22:55:48.379654  394947 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:55:48.379673  394947 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-746247"
	I1207 22:55:48.379682  394947 addons.go:70] Setting yakd=true in profile "addons-746247"
	I1207 22:55:48.379701  394947 addons.go:70] Setting storage-provisioner=true in profile "addons-746247"
	I1207 22:55:48.379714  394947 addons.go:70] Setting default-storageclass=true in profile "addons-746247"
	I1207 22:55:48.379715  394947 addons.go:239] Setting addon yakd=true in "addons-746247"
	I1207 22:55:48.379722  394947 addons.go:239] Setting addon storage-provisioner=true in "addons-746247"
	I1207 22:55:48.379730  394947 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-746247"
	I1207 22:55:48.379723  394947 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-746247"
	I1207 22:55:48.379746  394947 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-746247"
	I1207 22:55:48.379760  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.379762  394947 addons.go:70] Setting ingress-dns=true in profile "addons-746247"
	I1207 22:55:48.379768  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.379769  394947 addons.go:70] Setting gcp-auth=true in profile "addons-746247"
	I1207 22:55:48.379774  394947 addons.go:239] Setting addon ingress-dns=true in "addons-746247"
	I1207 22:55:48.379783  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.379790  394947 mustload.go:66] Loading cluster: addons-746247
	I1207 22:55:48.379854  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.379752  394947 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-746247"
	I1207 22:55:48.379899  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.379962  394947 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:55:48.380129  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.380188  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.380253  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.380272  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.380286  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.380292  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.380511  394947 addons.go:70] Setting volcano=true in profile "addons-746247"
	I1207 22:55:48.380537  394947 addons.go:239] Setting addon volcano=true in "addons-746247"
	I1207 22:55:48.380570  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.380615  394947 addons.go:70] Setting volumesnapshots=true in profile "addons-746247"
	I1207 22:55:48.380640  394947 addons.go:239] Setting addon volumesnapshots=true in "addons-746247"
	I1207 22:55:48.380685  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.381048  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.381146  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.381279  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.381390  394947 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-746247"
	I1207 22:55:48.381416  394947 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-746247"
	I1207 22:55:48.381441  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.381697  394947 addons.go:70] Setting registry=true in profile "addons-746247"
	I1207 22:55:48.381723  394947 addons.go:239] Setting addon registry=true in "addons-746247"
	I1207 22:55:48.381751  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.382140  394947 addons.go:70] Setting cloud-spanner=true in profile "addons-746247"
	I1207 22:55:48.379742  394947 addons.go:70] Setting ingress=true in profile "addons-746247"
	I1207 22:55:48.382172  394947 addons.go:239] Setting addon cloud-spanner=true in "addons-746247"
	I1207 22:55:48.382179  394947 addons.go:239] Setting addon ingress=true in "addons-746247"
	I1207 22:55:48.382196  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.382207  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.382256  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.382275  394947 addons.go:70] Setting registry-creds=true in profile "addons-746247"
	I1207 22:55:48.382293  394947 addons.go:239] Setting addon registry-creds=true in "addons-746247"
	I1207 22:55:48.382322  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.382603  394947 out.go:179] * Verifying Kubernetes components...
	I1207 22:55:48.382776  394947 addons.go:70] Setting inspektor-gadget=true in profile "addons-746247"
	I1207 22:55:48.382797  394947 addons.go:239] Setting addon inspektor-gadget=true in "addons-746247"
	I1207 22:55:48.382822  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.383622  394947 addons.go:70] Setting metrics-server=true in profile "addons-746247"
	I1207 22:55:48.383675  394947 addons.go:239] Setting addon metrics-server=true in "addons-746247"
	I1207 22:55:48.383709  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.384578  394947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 22:55:48.384580  394947 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-746247"
	I1207 22:55:48.385359  394947 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-746247"
	I1207 22:55:48.394718  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.395362  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.395778  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.396409  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.396699  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.397604  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.399857  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.417494  394947 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1207 22:55:48.418860  394947 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 22:55:48.418957  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1207 22:55:48.419115  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.434049  394947 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1207 22:55:48.435322  394947 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1207 22:55:48.435358  394947 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1207 22:55:48.435430  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.438298  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.439871  394947 addons.go:239] Setting addon default-storageclass=true in "addons-746247"
	I1207 22:55:48.439924  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.440436  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.457878  394947 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1207 22:55:48.458006  394947 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1207 22:55:48.461297  394947 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 22:55:48.461339  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1207 22:55:48.461406  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.469706  394947 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 22:55:48.469736  394947 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 22:55:48.469808  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	W1207 22:55:48.477402  394947 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1207 22:55:48.479468  394947 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 22:55:48.479499  394947 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 22:55:48.479576  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.484547  394947 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1207 22:55:48.484572  394947 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1207 22:55:48.484547  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1207 22:55:48.484547  394947 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1207 22:55:48.485893  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1207 22:55:48.485940  394947 out.go:179]   - Using image docker.io/registry:3.0.0
	I1207 22:55:48.487038  394947 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1207 22:55:48.487057  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1207 22:55:48.487118  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.487384  394947 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1207 22:55:48.487499  394947 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1207 22:55:48.487511  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1207 22:55:48.487558  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.488980  394947 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1207 22:55:48.489037  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1207 22:55:48.490239  394947 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1207 22:55:48.490257  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1207 22:55:48.490313  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.491551  394947 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-746247"
	I1207 22:55:48.491595  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:48.492092  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:48.492158  394947 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 22:55:48.492169  394947 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1207 22:55:48.492297  394947 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1207 22:55:48.493916  394947 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 22:55:48.493935  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1207 22:55:48.493988  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.497375  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1207 22:55:48.497641  394947 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1207 22:55:48.497658  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1207 22:55:48.497719  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.498044  394947 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 22:55:48.498070  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 22:55:48.498123  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.500809  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1207 22:55:48.501936  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.501962  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1207 22:55:48.503823  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1207 22:55:48.506452  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1207 22:55:48.507531  394947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1207 22:55:48.507552  394947 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1207 22:55:48.507635  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.513752  394947 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1207 22:55:48.514817  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1207 22:55:48.514842  394947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1207 22:55:48.514923  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.522213  394947 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1207 22:55:48.523223  394947 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1207 22:55:48.523251  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1207 22:55:48.523319  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.532478  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.540579  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.540479  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.565853  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.566352  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.571824  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.571983  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.572500  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.575094  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.586021  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.590462  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.591716  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.592905  394947 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1207 22:55:48.594345  394947 out.go:179]   - Using image docker.io/busybox:stable
	W1207 22:55:48.595012  394947 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1207 22:55:48.595048  394947 retry.go:31] will retry after 371.698503ms: ssh: handshake failed: EOF
	I1207 22:55:48.595614  394947 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 22:55:48.595643  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1207 22:55:48.595702  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:48.603267  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.611502  394947 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 22:55:48.611559  394947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 22:55:48.632588  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:48.639184  394947 node_ready.go:35] waiting up to 6m0s for node "addons-746247" to be "Ready" ...
	I1207 22:55:48.692347  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 22:55:48.692721  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 22:55:48.701336  394947 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1207 22:55:48.701370  394947 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1207 22:55:48.709515  394947 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1207 22:55:48.709563  394947 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1207 22:55:48.720956  394947 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1207 22:55:48.720981  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1207 22:55:48.739153  394947 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1207 22:55:48.739178  394947 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1207 22:55:48.742627  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 22:55:48.746094  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1207 22:55:48.746123  394947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1207 22:55:48.746846  394947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 22:55:48.746873  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1207 22:55:48.762198  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1207 22:55:48.767334  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 22:55:48.767900  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1207 22:55:48.775682  394947 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1207 22:55:48.775710  394947 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1207 22:55:48.778356  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1207 22:55:48.780675  394947 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1207 22:55:48.780711  394947 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1207 22:55:48.783595  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1207 22:55:48.783625  394947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1207 22:55:48.783695  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 22:55:48.783793  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1207 22:55:48.783804  394947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 22:55:48.783859  394947 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 22:55:48.801109  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 22:55:48.810803  394947 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1207 22:55:48.810837  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1207 22:55:48.824932  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1207 22:55:48.824967  394947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1207 22:55:48.834994  394947 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 22:55:48.835021  394947 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 22:55:48.845550  394947 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1207 22:55:48.845662  394947 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1207 22:55:48.868550  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1207 22:55:48.873612  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1207 22:55:48.873718  394947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1207 22:55:48.888394  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 22:55:48.897212  394947 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1207 22:55:48.897242  394947 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1207 22:55:48.926572  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1207 22:55:48.926600  394947 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1207 22:55:48.934181  394947 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1207 22:55:48.934217  394947 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1207 22:55:48.977730  394947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1207 22:55:48.977772  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1207 22:55:49.013149  394947 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 22:55:49.013181  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1207 22:55:49.031992  394947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1207 22:55:49.032037  394947 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1207 22:55:49.054683  394947 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1207 22:55:49.080316  394947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1207 22:55:49.080444  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1207 22:55:49.081540  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 22:55:49.126158  394947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1207 22:55:49.126240  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1207 22:55:49.160188  394947 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 22:55:49.160298  394947 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1207 22:55:49.224109  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 22:55:49.227686  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1207 22:55:49.562077  394947 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-746247" context rescaled to 1 replicas
	I1207 22:55:50.013303  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.270633102s)
	I1207 22:55:50.013983  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.12550754s)
	I1207 22:55:50.014021  394947 addons.go:495] Verifying addon metrics-server=true in "addons-746247"
	I1207 22:55:50.013980  394947 addons.go:495] Verifying addon ingress=true in "addons-746247"
	I1207 22:55:50.013469  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.246111481s)
	I1207 22:55:50.013531  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.245574215s)
	I1207 22:55:50.013659  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.235275584s)
	I1207 22:55:50.013691  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.229970065s)
	I1207 22:55:50.013775  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.229948198s)
	I1207 22:55:50.014384  394947 addons.go:495] Verifying addon registry=true in "addons-746247"
	I1207 22:55:50.013857  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.212722554s)
	I1207 22:55:50.013911  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.145264255s)
	I1207 22:55:50.013390  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.251155154s)
	I1207 22:55:50.016688  394947 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-746247 service yakd-dashboard -n yakd-dashboard
	
	I1207 22:55:50.016688  394947 out.go:179] * Verifying ingress addon...
	I1207 22:55:50.016730  394947 out.go:179] * Verifying registry addon...
	I1207 22:55:50.019404  394947 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1207 22:55:50.019405  394947 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1207 22:55:50.025283  394947 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1207 22:55:50.025305  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:50.026557  394947 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1207 22:55:50.026581  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1207 22:55:50.027932  394947 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
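	The 'default-storageclass' warning above is the usual optimistic-concurrency conflict: minikube tries to clear the default annotation on the rancher "local-path" class while another writer is updating the same object, so the apiserver rejects the write and the addon callback surfaces the error. A minimal sketch of the equivalent manual toggle, assuming minikube's own default class is named "standard" ("local-path" is the class named in the error; neither command is taken from the test run):
	
		# sketch only: how the default-class annotation is normally flipped by hand
		kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
		kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
	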
	I1207 22:55:50.484889  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.403258866s)
	W1207 22:55:50.484961  394947 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1207 22:55:50.484986  394947 retry.go:31] will retry after 352.93442ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
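	The "no matches for kind VolumeSnapshotClass" failure means the VolumeSnapshotClass CRD was created in the same apply but was not yet established when the custom resource was submitted, so the REST mapping lookup failed; the retry with `kubectl apply --force` a few lines below succeeds once the CRD is registered. A minimal sketch of how the same ordering could be enforced by hand, using the manifest paths from the log (a hedged alternative, not what the addon code actually does):
	
		# sketch only: wait for the snapshot CRD to be established before applying resources that use it
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	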
	I1207 22:55:50.485265  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.261108175s)
	I1207 22:55:50.485303  394947 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-746247"
	I1207 22:55:50.485385  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.257604959s)
	I1207 22:55:50.488628  394947 out.go:179] * Verifying csi-hostpath-driver addon...
	I1207 22:55:50.491055  394947 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1207 22:55:50.495408  394947 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1207 22:55:50.495428  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:50.598287  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:50.598417  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1207 22:55:50.643208  394947 node_ready.go:57] node "addons-746247" has "Ready":"False" status (will retry)
	I1207 22:55:50.838181  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 22:55:50.995062  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:51.022573  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:51.022711  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:51.494545  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:51.522541  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:51.522635  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:51.994353  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:52.023356  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:52.023469  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:52.494860  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:52.595384  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:52.595570  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:52.994884  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:53.022517  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:53.022705  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1207 22:55:53.142989  394947 node_ready.go:57] node "addons-746247" has "Ready":"False" status (will retry)
	I1207 22:55:53.353652  394947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.515421268s)
	I1207 22:55:53.494468  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:53.523073  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:53.523258  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:53.994579  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:54.022308  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:54.022543  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:54.494702  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:54.522423  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:54.522507  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:54.994287  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:55.023137  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:55.023196  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:55.494479  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:55.523461  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:55.523608  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1207 22:55:55.642478  394947 node_ready.go:57] node "addons-746247" has "Ready":"False" status (will retry)
	I1207 22:55:55.995195  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:56.023162  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:56.023177  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:56.047400  394947 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1207 22:55:56.047467  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:56.065963  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:56.170574  394947 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1207 22:55:56.184190  394947 addons.go:239] Setting addon gcp-auth=true in "addons-746247"
	I1207 22:55:56.184258  394947 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:55:56.184852  394947 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:55:56.203245  394947 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1207 22:55:56.203320  394947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:55:56.222857  394947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:55:56.315851  394947 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1207 22:55:56.316923  394947 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1207 22:55:56.317892  394947 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1207 22:55:56.317912  394947 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1207 22:55:56.331739  394947 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1207 22:55:56.331765  394947 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1207 22:55:56.345721  394947 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 22:55:56.345742  394947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1207 22:55:56.359848  394947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 22:55:56.494991  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:56.522752  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:56.522832  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:56.672752  394947 addons.go:495] Verifying addon gcp-auth=true in "addons-746247"
	I1207 22:55:56.674099  394947 out.go:179] * Verifying gcp-auth addon...
	I1207 22:55:56.675875  394947 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1207 22:55:56.680263  394947 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1207 22:55:56.680287  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
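	(The kapi.go lines above and below record a simple poll: list pods matching a label selector, check their phase, retry until Running. The following is a minimal editorial sketch of that pattern using client-go, not minikube's actual kapi.go; the kubeconfig path, namespace, selector, and retry interval are illustrative assumptions.)

	// Sketch only: poll for a pod matching a label selector until it is Running,
	// roughly what the "waiting for pod ..." log lines are doing for gcp-auth.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPod(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		for {
			// List pods for the selector and check the first match's phase.
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			if len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond): // retry interval is illustrative
			}
		}
	}

	func main() {
		// Kubeconfig path is an assumption for the sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		if err := waitForPod(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Running")
	}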
	I1207 22:55:56.995035  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:57.022386  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:57.022526  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:57.179304  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:57.494596  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:57.522316  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:57.522412  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:57.680160  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:57.994256  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:58.022960  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:58.023118  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1207 22:55:58.142921  394947 node_ready.go:57] node "addons-746247" has "Ready":"False" status (will retry)
	I1207 22:55:58.178919  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:58.494142  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:58.523165  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:58.523171  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:58.679648  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:58.994665  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:59.022740  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:59.022832  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:59.179043  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:59.494440  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:55:59.523238  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:55:59.523433  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:55:59.679744  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:55:59.994792  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:00.022610  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:00.022765  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:00.179283  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:00.494392  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:00.523292  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:00.523497  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1207 22:56:00.642217  394947 node_ready.go:57] node "addons-746247" has "Ready":"False" status (will retry)
	I1207 22:56:00.679309  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:00.999184  394947 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1207 22:56:00.999214  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:01.024318  394947 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1207 22:56:01.024381  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:01.024612  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:01.142293  394947 node_ready.go:49] node "addons-746247" is "Ready"
	I1207 22:56:01.142357  394947 node_ready.go:38] duration metric: took 12.503123434s for node "addons-746247" to be "Ready" ...
	I1207 22:56:01.142378  394947 api_server.go:52] waiting for apiserver process to appear ...
	I1207 22:56:01.142443  394947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 22:56:01.160923  394947 api_server.go:72] duration metric: took 12.781396676s to wait for apiserver process to appear ...
	I1207 22:56:01.160960  394947 api_server.go:88] waiting for apiserver healthz status ...
	I1207 22:56:01.160986  394947 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1207 22:56:01.166747  394947 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1207 22:56:01.167990  394947 api_server.go:141] control plane version: v1.34.2
	I1207 22:56:01.168028  394947 api_server.go:131] duration metric: took 7.059712ms to wait for apiserver health ...
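	(The healthz lines just above show the apiserver probe at https://192.168.49.2:8443/healthz returning "200: ok". Below is a minimal, self-contained sketch of that probe; it skips TLS verification purely to keep the example short, whereas the real client trusts the cluster CA.)

	// Sketch only: probe the apiserver /healthz endpoint as the log does.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustrative shortcut; a real check would use the cluster's CA bundle.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok" on a healthy control plane
	}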
	I1207 22:56:01.168039  394947 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 22:56:01.177516  394947 system_pods.go:59] 20 kube-system pods found
	I1207 22:56:01.177567  394947 system_pods.go:61] "amd-gpu-device-plugin-kblb2" [0d7d3c61-b559-4b2d-ad9c-0c55bd5a52ee] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:56:01.177579  394947 system_pods.go:61] "coredns-66bc5c9577-tphvv" [7beb0e82-6dc4-4096-af61-36892f47cffa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:56:01.177592  394947 system_pods.go:61] "csi-hostpath-attacher-0" [a5354250-4aeb-4575-aedb-24c6f8664823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:56:01.177600  394947 system_pods.go:61] "csi-hostpath-resizer-0" [0706c1a6-d865-41e1-b896-5466613da19a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:56:01.177609  394947 system_pods.go:61] "csi-hostpathplugin-x5hj6" [4b6180c4-31ad-42af-bba8-c8c05417d718] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:56:01.177616  394947 system_pods.go:61] "etcd-addons-746247" [e4ee9c9c-4d01-4b73-ac5a-1bbbd97bbe79] Running
	I1207 22:56:01.177623  394947 system_pods.go:61] "kindnet-r872z" [64913453-1fd0-4d9e-80e0-f4e33f99b8ff] Running
	I1207 22:56:01.177628  394947 system_pods.go:61] "kube-apiserver-addons-746247" [501e8522-edbc-4fff-bb71-a85168d6c576] Running
	I1207 22:56:01.177635  394947 system_pods.go:61] "kube-controller-manager-addons-746247" [4ebf58bd-0977-4c58-b77d-e20f01592d9d] Running
	I1207 22:56:01.177643  394947 system_pods.go:61] "kube-ingress-dns-minikube" [b03239fb-2faa-41b2-bc04-248413da0752] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:56:01.177648  394947 system_pods.go:61] "kube-proxy-j7cvz" [9f89bed5-657e-40e5-b6d4-f90d6c36743e] Running
	I1207 22:56:01.177654  394947 system_pods.go:61] "kube-scheduler-addons-746247" [090303be-b2fa-46c7-bec7-ae11cd33ab78] Running
	I1207 22:56:01.177667  394947 system_pods.go:61] "metrics-server-85b7d694d7-jnsx9" [2733de69-8b13-43ab-8b4e-a11f01ca6694] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:56:01.177675  394947 system_pods.go:61] "nvidia-device-plugin-daemonset-gpckr" [db82d55a-0dbb-4348-a938-da80fe468a31] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:56:01.177684  394947 system_pods.go:61] "registry-6b586f9694-wsdqp" [56184daa-e3a4-46ca-b017-5a3dd986f623] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:56:01.177691  394947 system_pods.go:61] "registry-creds-764b6fb674-vl9gn" [fd8e6cfd-a85b-4980-b193-cf4b6f8bc5b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:56:01.177700  394947 system_pods.go:61] "registry-proxy-d7n5r" [bfdc5400-c591-460d-89bb-87f432c0b904] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:56:01.177710  394947 system_pods.go:61] "snapshot-controller-7d9fbc56b8-lg5vk" [d2468d0f-bdb3-4321-a85a-ac7e3fc46b69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:56:01.177723  394947 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nzqtx" [c2ea9276-b890-436f-9681-f173192e1580] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:56:01.177733  394947 system_pods.go:61] "storage-provisioner" [f3580680-aa34-475b-a6a6-1c280b516ae0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:56:01.177747  394947 system_pods.go:74] duration metric: took 9.700592ms to wait for pod list to return data ...
	I1207 22:56:01.177762  394947 default_sa.go:34] waiting for default service account to be created ...
	I1207 22:56:01.179991  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:01.180680  394947 default_sa.go:45] found service account: "default"
	I1207 22:56:01.180708  394947 default_sa.go:55] duration metric: took 2.935344ms for default service account to be created ...
	I1207 22:56:01.180720  394947 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 22:56:01.185104  394947 system_pods.go:86] 20 kube-system pods found
	I1207 22:56:01.185149  394947 system_pods.go:89] "amd-gpu-device-plugin-kblb2" [0d7d3c61-b559-4b2d-ad9c-0c55bd5a52ee] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:56:01.185161  394947 system_pods.go:89] "coredns-66bc5c9577-tphvv" [7beb0e82-6dc4-4096-af61-36892f47cffa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 22:56:01.185170  394947 system_pods.go:89] "csi-hostpath-attacher-0" [a5354250-4aeb-4575-aedb-24c6f8664823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:56:01.185178  394947 system_pods.go:89] "csi-hostpath-resizer-0" [0706c1a6-d865-41e1-b896-5466613da19a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:56:01.185188  394947 system_pods.go:89] "csi-hostpathplugin-x5hj6" [4b6180c4-31ad-42af-bba8-c8c05417d718] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:56:01.185193  394947 system_pods.go:89] "etcd-addons-746247" [e4ee9c9c-4d01-4b73-ac5a-1bbbd97bbe79] Running
	I1207 22:56:01.185199  394947 system_pods.go:89] "kindnet-r872z" [64913453-1fd0-4d9e-80e0-f4e33f99b8ff] Running
	I1207 22:56:01.185204  394947 system_pods.go:89] "kube-apiserver-addons-746247" [501e8522-edbc-4fff-bb71-a85168d6c576] Running
	I1207 22:56:01.185210  394947 system_pods.go:89] "kube-controller-manager-addons-746247" [4ebf58bd-0977-4c58-b77d-e20f01592d9d] Running
	I1207 22:56:01.185217  394947 system_pods.go:89] "kube-ingress-dns-minikube" [b03239fb-2faa-41b2-bc04-248413da0752] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:56:01.185224  394947 system_pods.go:89] "kube-proxy-j7cvz" [9f89bed5-657e-40e5-b6d4-f90d6c36743e] Running
	I1207 22:56:01.185229  394947 system_pods.go:89] "kube-scheduler-addons-746247" [090303be-b2fa-46c7-bec7-ae11cd33ab78] Running
	I1207 22:56:01.185236  394947 system_pods.go:89] "metrics-server-85b7d694d7-jnsx9" [2733de69-8b13-43ab-8b4e-a11f01ca6694] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:56:01.185243  394947 system_pods.go:89] "nvidia-device-plugin-daemonset-gpckr" [db82d55a-0dbb-4348-a938-da80fe468a31] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:56:01.185251  394947 system_pods.go:89] "registry-6b586f9694-wsdqp" [56184daa-e3a4-46ca-b017-5a3dd986f623] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:56:01.185258  394947 system_pods.go:89] "registry-creds-764b6fb674-vl9gn" [fd8e6cfd-a85b-4980-b193-cf4b6f8bc5b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:56:01.185268  394947 system_pods.go:89] "registry-proxy-d7n5r" [bfdc5400-c591-460d-89bb-87f432c0b904] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:56:01.185281  394947 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lg5vk" [d2468d0f-bdb3-4321-a85a-ac7e3fc46b69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:56:01.185292  394947 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzqtx" [c2ea9276-b890-436f-9681-f173192e1580] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:56:01.185299  394947 system_pods.go:89] "storage-provisioner" [f3580680-aa34-475b-a6a6-1c280b516ae0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 22:56:01.185319  394947 retry.go:31] will retry after 198.97653ms: missing components: kube-dns
	I1207 22:56:01.390085  394947 system_pods.go:86] 20 kube-system pods found
	I1207 22:56:01.390125  394947 system_pods.go:89] "amd-gpu-device-plugin-kblb2" [0d7d3c61-b559-4b2d-ad9c-0c55bd5a52ee] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1207 22:56:01.390132  394947 system_pods.go:89] "coredns-66bc5c9577-tphvv" [7beb0e82-6dc4-4096-af61-36892f47cffa] Running
	I1207 22:56:01.390152  394947 system_pods.go:89] "csi-hostpath-attacher-0" [a5354250-4aeb-4575-aedb-24c6f8664823] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 22:56:01.390163  394947 system_pods.go:89] "csi-hostpath-resizer-0" [0706c1a6-d865-41e1-b896-5466613da19a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 22:56:01.390171  394947 system_pods.go:89] "csi-hostpathplugin-x5hj6" [4b6180c4-31ad-42af-bba8-c8c05417d718] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 22:56:01.390182  394947 system_pods.go:89] "etcd-addons-746247" [e4ee9c9c-4d01-4b73-ac5a-1bbbd97bbe79] Running
	I1207 22:56:01.390192  394947 system_pods.go:89] "kindnet-r872z" [64913453-1fd0-4d9e-80e0-f4e33f99b8ff] Running
	I1207 22:56:01.390198  394947 system_pods.go:89] "kube-apiserver-addons-746247" [501e8522-edbc-4fff-bb71-a85168d6c576] Running
	I1207 22:56:01.390203  394947 system_pods.go:89] "kube-controller-manager-addons-746247" [4ebf58bd-0977-4c58-b77d-e20f01592d9d] Running
	I1207 22:56:01.390211  394947 system_pods.go:89] "kube-ingress-dns-minikube" [b03239fb-2faa-41b2-bc04-248413da0752] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 22:56:01.390216  394947 system_pods.go:89] "kube-proxy-j7cvz" [9f89bed5-657e-40e5-b6d4-f90d6c36743e] Running
	I1207 22:56:01.390222  394947 system_pods.go:89] "kube-scheduler-addons-746247" [090303be-b2fa-46c7-bec7-ae11cd33ab78] Running
	I1207 22:56:01.390230  394947 system_pods.go:89] "metrics-server-85b7d694d7-jnsx9" [2733de69-8b13-43ab-8b4e-a11f01ca6694] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1207 22:56:01.390239  394947 system_pods.go:89] "nvidia-device-plugin-daemonset-gpckr" [db82d55a-0dbb-4348-a938-da80fe468a31] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 22:56:01.390255  394947 system_pods.go:89] "registry-6b586f9694-wsdqp" [56184daa-e3a4-46ca-b017-5a3dd986f623] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1207 22:56:01.390269  394947 system_pods.go:89] "registry-creds-764b6fb674-vl9gn" [fd8e6cfd-a85b-4980-b193-cf4b6f8bc5b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1207 22:56:01.390277  394947 system_pods.go:89] "registry-proxy-d7n5r" [bfdc5400-c591-460d-89bb-87f432c0b904] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1207 22:56:01.390285  394947 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lg5vk" [d2468d0f-bdb3-4321-a85a-ac7e3fc46b69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:56:01.390303  394947 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nzqtx" [c2ea9276-b890-436f-9681-f173192e1580] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1207 22:56:01.390310  394947 system_pods.go:89] "storage-provisioner" [f3580680-aa34-475b-a6a6-1c280b516ae0] Running
	I1207 22:56:01.390319  394947 system_pods.go:126] duration metric: took 209.59162ms to wait for k8s-apps to be running ...
	I1207 22:56:01.390362  394947 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 22:56:01.390415  394947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 22:56:01.404422  394947 system_svc.go:56] duration metric: took 14.048204ms WaitForService to wait for kubelet
	I1207 22:56:01.404457  394947 kubeadm.go:587] duration metric: took 13.02493717s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 22:56:01.404480  394947 node_conditions.go:102] verifying NodePressure condition ...
	I1207 22:56:01.407624  394947 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 22:56:01.407654  394947 node_conditions.go:123] node cpu capacity is 8
	I1207 22:56:01.407669  394947 node_conditions.go:105] duration metric: took 3.182892ms to run NodePressure ...
	I1207 22:56:01.407686  394947 start.go:242] waiting for startup goroutines ...
	I1207 22:56:01.494832  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:01.522645  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:01.522649  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:01.679447  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:01.995201  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:02.023378  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:02.023565  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:02.179492  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:02.494615  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:02.595859  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:02.595916  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:02.680870  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:02.996634  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:03.023862  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:03.024219  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:03.180019  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:03.495307  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:03.523096  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:03.523195  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:03.680573  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:03.995603  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:04.023908  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:04.023908  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:04.180096  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:04.494130  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:04.523568  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:04.523615  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:04.679867  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:04.995619  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:05.022899  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:05.022929  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:05.179067  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:05.496154  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:05.523072  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:05.523549  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:05.679917  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:05.994806  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:06.024643  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:06.024694  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:06.180229  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:06.494215  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:06.523077  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:06.523105  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:06.679215  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:06.995054  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:07.023688  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:07.024039  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:07.179965  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:07.494965  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:07.522771  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:07.522851  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:07.680080  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:07.994273  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:08.024084  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:08.024141  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:08.179098  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:08.494937  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:08.522911  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:08.522939  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:08.679135  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:08.994206  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:09.022971  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:09.023046  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:09.178711  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:09.495683  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:09.522235  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:09.522393  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:09.679207  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:09.994632  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:10.022879  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:10.022905  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:10.179570  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:10.495384  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:10.523613  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:10.523689  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:10.680196  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:10.995479  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:11.023698  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:11.023723  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:11.179911  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:11.495354  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:11.523120  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:11.523154  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:11.679594  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:11.994809  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:12.022421  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:12.022554  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:12.179691  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:12.495587  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:12.596073  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:12.596350  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:12.679030  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:12.994803  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:13.022674  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:13.022709  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:13.179667  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:13.495055  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:13.522734  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:13.522792  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:13.679922  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:13.995589  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:14.023292  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:14.023350  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:14.179146  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:14.501017  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:14.522916  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:14.523151  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:14.679436  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:14.994630  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:15.023552  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:15.023610  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:15.179948  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:15.495612  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:15.523280  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:15.523341  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:15.679407  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:15.994556  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:16.023574  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:16.023732  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:16.179759  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:16.496082  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:16.523305  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:16.523369  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:16.680218  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:16.995804  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:17.022698  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:17.022871  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:17.179923  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:17.549402  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:17.549437  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:17.549677  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:17.679510  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:17.995187  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:18.023159  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:18.023310  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:18.180318  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:18.495211  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:18.595655  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:18.595692  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:18.679275  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:18.994866  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:19.022902  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:19.023163  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:19.179213  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:19.494526  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:19.523252  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:19.523506  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:19.679405  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:19.995272  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:20.023074  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:20.023389  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:20.178824  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:20.495569  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:20.523159  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:20.523438  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:20.679219  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:20.994639  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:21.023714  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:21.023815  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:21.178856  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:21.495478  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:21.523473  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:21.523541  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:21.679360  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:21.995316  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:22.022780  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:22.022854  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:22.180549  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:22.541773  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:22.542556  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:22.542741  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:22.686106  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:22.995162  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:23.095704  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:23.095704  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:23.179272  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:23.497161  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:23.522762  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:23.522814  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:23.679955  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:23.995461  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:24.023345  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:24.023595  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:24.179292  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:24.495048  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:24.595803  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:24.595823  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:24.679543  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:24.995106  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:25.022971  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:25.023125  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:25.178820  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:25.496897  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:25.597507  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:25.597688  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:25.697760  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:25.994973  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:26.022636  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:26.022842  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:26.179171  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:26.494765  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:26.522695  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:26.522717  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:26.679142  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:26.994282  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:27.023055  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:27.023052  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:27.179126  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:27.495088  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:27.522993  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:27.523029  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:27.678955  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:27.994376  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:28.023013  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:28.023059  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:28.179399  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:28.494468  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:28.523390  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:28.523728  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:28.679736  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:28.995082  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:29.022873  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:29.023116  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:29.178779  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:29.496007  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:29.523361  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:29.523444  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:29.679378  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:29.994630  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:30.023958  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:30.024406  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:30.179876  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:30.495180  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:30.523233  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:30.523257  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:30.679496  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:30.994878  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:31.022671  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:31.022835  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:31.179693  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:31.494892  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:31.522734  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:31.522793  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:31.679736  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:31.994974  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:32.022915  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:32.023064  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:32.180149  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:32.495176  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:32.596061  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:32.596131  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:32.696193  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:32.994769  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:33.022910  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:33.022916  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 22:56:33.179422  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:33.493953  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:33.522466  394947 kapi.go:107] duration metric: took 43.50305903s to wait for kubernetes.io/minikube-addons=registry ...
	I1207 22:56:33.522565  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:33.681341  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:33.994804  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:34.023881  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:34.180801  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:34.533032  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:34.547640  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:34.679664  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:34.995428  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:35.023208  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:35.178958  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:35.494591  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:35.523585  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:35.682247  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:35.997138  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:36.024548  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:36.179492  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:36.496547  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:36.523185  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:36.679115  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:36.994351  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:37.023565  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:37.179919  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:37.494748  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:37.522912  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:37.680159  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:37.994959  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:38.022757  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:38.179377  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:38.494490  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:38.523375  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:38.679432  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:38.995144  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:39.023357  394947 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 22:56:39.179648  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:39.496491  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:39.596840  394947 kapi.go:107] duration metric: took 49.577429437s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1207 22:56:39.679232  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:39.995134  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:40.178873  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:40.493955  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:40.718207  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:40.994616  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:41.179939  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:41.495640  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:41.680389  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:41.994847  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:42.180182  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 22:56:42.494811  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:42.680459  394947 kapi.go:107] duration metric: took 46.004581818s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1207 22:56:42.682229  394947 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-746247 cluster.
	I1207 22:56:42.683867  394947 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1207 22:56:42.687060  394947 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1207 22:56:42.995303  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:43.494736  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:43.994566  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:44.495379  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:44.994398  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:45.494587  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:45.995386  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:46.494493  394947 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 22:56:46.995144  394947 kapi.go:107] duration metric: took 56.504088937s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1207 22:56:46.996992  394947 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, metrics-server, storage-provisioner, amd-gpu-device-plugin, inspektor-gadget, cloud-spanner, yakd, storage-provisioner-rancher, registry-creds, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1207 22:56:46.998234  394947 addons.go:530] duration metric: took 58.618697865s for enable addons: enabled=[nvidia-device-plugin ingress-dns metrics-server storage-provisioner amd-gpu-device-plugin inspektor-gadget cloud-spanner yakd storage-provisioner-rancher registry-creds volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1207 22:56:46.998279  394947 start.go:247] waiting for cluster config update ...
	I1207 22:56:46.998302  394947 start.go:256] writing updated cluster config ...
	I1207 22:56:46.998606  394947 ssh_runner.go:195] Run: rm -f paused
	I1207 22:56:47.002993  394947 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 22:56:47.006417  394947 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tphvv" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.010943  394947 pod_ready.go:94] pod "coredns-66bc5c9577-tphvv" is "Ready"
	I1207 22:56:47.010979  394947 pod_ready.go:86] duration metric: took 4.536878ms for pod "coredns-66bc5c9577-tphvv" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.013268  394947 pod_ready.go:83] waiting for pod "etcd-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.017464  394947 pod_ready.go:94] pod "etcd-addons-746247" is "Ready"
	I1207 22:56:47.017490  394947 pod_ready.go:86] duration metric: took 4.195356ms for pod "etcd-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.019467  394947 pod_ready.go:83] waiting for pod "kube-apiserver-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.023126  394947 pod_ready.go:94] pod "kube-apiserver-addons-746247" is "Ready"
	I1207 22:56:47.023147  394947 pod_ready.go:86] duration metric: took 3.660703ms for pod "kube-apiserver-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.025010  394947 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.407220  394947 pod_ready.go:94] pod "kube-controller-manager-addons-746247" is "Ready"
	I1207 22:56:47.407248  394947 pod_ready.go:86] duration metric: took 382.220157ms for pod "kube-controller-manager-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:47.607770  394947 pod_ready.go:83] waiting for pod "kube-proxy-j7cvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:48.006940  394947 pod_ready.go:94] pod "kube-proxy-j7cvz" is "Ready"
	I1207 22:56:48.006968  394947 pod_ready.go:86] duration metric: took 399.164571ms for pod "kube-proxy-j7cvz" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:48.207440  394947 pod_ready.go:83] waiting for pod "kube-scheduler-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:48.607352  394947 pod_ready.go:94] pod "kube-scheduler-addons-746247" is "Ready"
	I1207 22:56:48.607387  394947 pod_ready.go:86] duration metric: took 399.913886ms for pod "kube-scheduler-addons-746247" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 22:56:48.607404  394947 pod_ready.go:40] duration metric: took 1.604378815s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 22:56:48.654476  394947 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 22:56:48.656460  394947 out.go:179] * Done! kubectl is now configured to use "addons-746247" cluster and "default" namespace by default
	
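	The gcp-auth messages above describe how to opt an individual pod out of credential mounting by giving it a `gcp-auth-skip-secret` label, and how to pick up credentials for pods that already existed when the addon was enabled. As an illustrative sketch only (the label value "true", the pod name, and the sleep command are assumptions, not taken from this run), the two cases might look like:
	
	  # hedged sketch: create a pod with the gcp-auth-skip-secret label so the
	  # webhook does not mount GCP credentials into it (label must be set at creation time)
	  kubectl run skip-demo --image=busybox --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600
	
	  # hedged sketch: refresh credentials for pods created before the addon was enabled,
	  # per the "rerun addons enable with --refresh" hint in the log above
	  minikube -p addons-746247 addons enable gcp-auth --refresh
	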
	
	==> CRI-O <==
	Dec 07 22:56:45 addons-746247 crio[772]: time="2025-12-07T22:56:45.828807211Z" level=info msg="Starting container: 15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8" id=f27ec19d-7214-412a-8566-05798d8a307f name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 22:56:45 addons-746247 crio[772]: time="2025-12-07T22:56:45.83218953Z" level=info msg="Started container" PID=6097 containerID=15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8 description=kube-system/csi-hostpathplugin-x5hj6/csi-snapshotter id=f27ec19d-7214-412a-8566-05798d8a307f name=/runtime.v1.RuntimeService/StartContainer sandboxID=2df6bb8a249f286d621e24148fefe969a330f023af07ac3b1dbc529990c577c1
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.506913103Z" level=info msg="Running pod sandbox: default/busybox/POD" id=50dadf71-7573-45bf-aa50-76875993892c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.50706648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.513446027Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8226662d2fc5f271c055bd60394aa9c1f0e84db5881bb4826ed9b7c4d4a39e25 UID:5e12dbfd-83fd-46c1-9d58-5e26d50cf46f NetNS:/var/run/netns/ffc7ad06-71d1-415e-9151-d63fca797cc1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00080a468}] Aliases:map[]}"
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.513480575Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.524377345Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8226662d2fc5f271c055bd60394aa9c1f0e84db5881bb4826ed9b7c4d4a39e25 UID:5e12dbfd-83fd-46c1-9d58-5e26d50cf46f NetNS:/var/run/netns/ffc7ad06-71d1-415e-9151-d63fca797cc1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00080a468}] Aliases:map[]}"
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.524518145Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.525396625Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.52622637Z" level=info msg="Ran pod sandbox 8226662d2fc5f271c055bd60394aa9c1f0e84db5881bb4826ed9b7c4d4a39e25 with infra container: default/busybox/POD" id=50dadf71-7573-45bf-aa50-76875993892c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.527535242Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=18a9ec00-5649-4a9b-814a-da6f80f9362b name=/runtime.v1.ImageService/ImageStatus
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.527672895Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=18a9ec00-5649-4a9b-814a-da6f80f9362b name=/runtime.v1.ImageService/ImageStatus
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.5277328Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=18a9ec00-5649-4a9b-814a-da6f80f9362b name=/runtime.v1.ImageService/ImageStatus
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.528362598Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b0099bf7-f4c5-4165-95a5-85087854a351 name=/runtime.v1.ImageService/PullImage
	Dec 07 22:56:52 addons-746247 crio[772]: time="2025-12-07T22:56:52.529976732Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 07 22:56:54 addons-746247 crio[772]: time="2025-12-07T22:56:54.839606601Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b0099bf7-f4c5-4165-95a5-85087854a351 name=/runtime.v1.ImageService/PullImage
	Dec 07 22:56:54 addons-746247 crio[772]: time="2025-12-07T22:56:54.840187478Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a3868de7-e409-4973-b092-975c0f70f0c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 22:56:54 addons-746247 crio[772]: time="2025-12-07T22:56:54.841619045Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ad50030f-4e1b-445a-adaf-17f37001541c name=/runtime.v1.ImageService/ImageStatus
	Dec 07 22:56:54 addons-746247 crio[772]: time="2025-12-07T22:56:54.845112645Z" level=info msg="Creating container: default/busybox/busybox" id=32db3f27-b064-46a4-bcad-47a596417c81 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 22:56:54 addons-746247 crio[772]: time="2025-12-07T22:56:54.845219642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 22:56:54 addons-746247 crio[772]: time="2025-12-07T22:56:54.851891362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 22:56:54 addons-746247 crio[772]: time="2025-12-07T22:56:54.852448301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 22:56:54 addons-746247 crio[772]: time="2025-12-07T22:56:54.884382262Z" level=info msg="Created container 84c3df26fdfc509cd479b8c92421f1643f7119113d984f364d393c9136b657a6: default/busybox/busybox" id=32db3f27-b064-46a4-bcad-47a596417c81 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 22:56:54 addons-746247 crio[772]: time="2025-12-07T22:56:54.88503715Z" level=info msg="Starting container: 84c3df26fdfc509cd479b8c92421f1643f7119113d984f364d393c9136b657a6" id=bafef9cd-71a1-4729-82eb-e44e7978ee89 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 22:56:54 addons-746247 crio[772]: time="2025-12-07T22:56:54.887288322Z" level=info msg="Started container" PID=6236 containerID=84c3df26fdfc509cd479b8c92421f1643f7119113d984f364d393c9136b657a6 description=default/busybox/busybox id=bafef9cd-71a1-4729-82eb-e44e7978ee89 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8226662d2fc5f271c055bd60394aa9c1f0e84db5881bb4826ed9b7c4d4a39e25
	
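	The CRI-O log above traces the default/busybox pod end to end: sandbox creation on the kindnet network, the image-status check and pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc, and the container create/start. A hedged sketch of inspecting the same objects directly against the runtime (the filters and the ssh step are assumptions about how one would reproduce this, not commands from the run):
	
	  # hedged sketch: inspect the runtime state CRI-O logged above,
	  # from inside the node (e.g. via `minikube -p addons-746247 ssh`)
	  sudo crictl pods --name busybox      # the sandbox from "Ran pod sandbox ..."
	  sudo crictl images | grep busybox    # the image pulled from gcr.io/k8s-minikube
	  sudo crictl ps --name busybox        # the started container (84c3df26fdfc5...)
	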
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	84c3df26fdfc5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          9 seconds ago        Running             busybox                                  0                   8226662d2fc5f       busybox                                    default
	15d6c69879b1c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          18 seconds ago       Running             csi-snapshotter                          0                   2df6bb8a249f2       csi-hostpathplugin-x5hj6                   kube-system
	5fb12f5f4df2a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          19 seconds ago       Running             csi-provisioner                          0                   2df6bb8a249f2       csi-hostpathplugin-x5hj6                   kube-system
	fe56a017640b6       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            20 seconds ago       Running             liveness-probe                           0                   2df6bb8a249f2       csi-hostpathplugin-x5hj6                   kube-system
	504d8b39e428b       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           20 seconds ago       Running             hostpath                                 0                   2df6bb8a249f2       csi-hostpathplugin-x5hj6                   kube-system
	0cc657ef96c6a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 21 seconds ago       Running             gcp-auth                                 0                   a7b56cb6029cb       gcp-auth-78565c9fb4-x8dr5                  gcp-auth
	50ad042517d0a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                24 seconds ago       Running             node-driver-registrar                    0                   2df6bb8a249f2       csi-hostpathplugin-x5hj6                   kube-system
	e4e2013e7e709       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             24 seconds ago       Running             controller                               0                   0a6436af92a21       ingress-nginx-controller-6c8bf45fb-7h5rb   ingress-nginx
	2aa48bdedb241       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            28 seconds ago       Running             gadget                                   0                   193349b1d36b6       gadget-8ktw6                               gadget
	ccc105493c42c       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             30 seconds ago       Exited              patch                                    2                   70e74b543958a       gcp-auth-certs-patch-7jxpl                 gcp-auth
	b28acd3bc252a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              31 seconds ago       Running             registry-proxy                           0                   b38a963d885d0       registry-proxy-d7n5r                       kube-system
	89c985a674b7b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   33 seconds ago       Exited              create                                   0                   7696c3a2531d5       gcp-auth-certs-create-s924n                gcp-auth
	1dad0dc022510       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     34 seconds ago       Running             nvidia-device-plugin-ctr                 0                   25af72303bf3e       nvidia-device-plugin-daemonset-gpckr       kube-system
	7e6ab6bbbad33       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      38 seconds ago       Running             volume-snapshot-controller               0                   4e17f514a68c7       snapshot-controller-7d9fbc56b8-nzqtx       kube-system
	2ee9d403c718a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      38 seconds ago       Running             volume-snapshot-controller               0                   4bd83b91087d4       snapshot-controller-7d9fbc56b8-lg5vk       kube-system
	d235bae133495       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     39 seconds ago       Running             amd-gpu-device-plugin                    0                   211b1f6a7960a       amd-gpu-device-plugin-kblb2                kube-system
	dd2a1ddd16307       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             41 seconds ago       Running             csi-attacher                             0                   b650eeb7b6a95       csi-hostpath-attacher-0                    kube-system
	b0daa49120f4c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             42 seconds ago       Running             local-path-provisioner                   0                   6ee9cc074ad55       local-path-provisioner-648f6765c9-5n4rs    local-path-storage
	08fe42979fddb       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              43 seconds ago       Running             csi-resizer                              0                   eaa71e68f3756       csi-hostpath-resizer-0                     kube-system
	79ffbf10d4d6a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   44 seconds ago       Running             csi-external-health-monitor-controller   0                   2df6bb8a249f2       csi-hostpathplugin-x5hj6                   kube-system
	b21b334597fd7       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              45 seconds ago       Running             yakd                                     0                   4671a413d1e60       yakd-dashboard-5ff678cb9-nkjk8             yakd-dashboard
	0a5bc6342e0fa       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           49 seconds ago       Running             registry                                 0                   2c185b03d5f06       registry-6b586f9694-wsdqp                  kube-system
	a32a551446fdd       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             50 seconds ago       Exited              patch                                    1                   9ecf1e3c45cc0       ingress-nginx-admission-patch-klnc2        ingress-nginx
	268f49610f6f9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   51 seconds ago       Exited              create                                   0                   3d1b80cad1630       ingress-nginx-admission-create-bkb7d       ingress-nginx
	443f71f01193e       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               52 seconds ago       Running             cloud-spanner-emulator                   0                   526627ef47ab1       cloud-spanner-emulator-5bdddb765-8hk6l     default
	125a62d8c60a9       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        56 seconds ago       Running             metrics-server                           0                   5a21cd27baca0       metrics-server-85b7d694d7-jnsx9            kube-system
	f043948674122       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               57 seconds ago       Running             minikube-ingress-dns                     0                   4580123f57f82       kube-ingress-dns-minikube                  kube-system
	c09a0b77cbea1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   75dbd1365fd8f       storage-provisioner                        kube-system
	c7ac4b9dcfe98       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   7a926a976a790       coredns-66bc5c9577-tphvv                   kube-system
	d9470261de6e4       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago   Running             kube-proxy                               0                   4f19d3960bb7c       kube-proxy-j7cvz                           kube-system
	4cd369ec2d01e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   b2733beb80ca1       kindnet-r872z                              kube-system
	2f96412fe3f9d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago   Running             kube-controller-manager                  0                   cb6682bad036b       kube-controller-manager-addons-746247      kube-system
	070b82a22d636       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago   Running             kube-apiserver                           0                   e1fa5fbe7aec3       kube-apiserver-addons-746247               kube-system
	bbb24b899c6b3       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   1b31ac2f693d1       etcd-addons-746247                         kube-system
	cb318a4f62348       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago   Running             kube-scheduler                           0                   485463c20fc46       kube-scheduler-addons-746247               kube-system
	
	
	==> coredns [c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e] <==
	[INFO] 10.244.0.18:53900 - 44962 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000121433s
	[INFO] 10.244.0.18:45771 - 6683 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011807s
	[INFO] 10.244.0.18:45771 - 6366 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000171266s
	[INFO] 10.244.0.18:46168 - 16913 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000059827s
	[INFO] 10.244.0.18:46168 - 16524 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000071425s
	[INFO] 10.244.0.18:42990 - 49029 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000060273s
	[INFO] 10.244.0.18:42990 - 49338 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00010078s
	[INFO] 10.244.0.18:60255 - 23504 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00006129s
	[INFO] 10.244.0.18:60255 - 23759 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000087682s
	[INFO] 10.244.0.18:35249 - 1240 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00013674s
	[INFO] 10.244.0.18:35249 - 1017 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000113326s
	[INFO] 10.244.0.22:39916 - 63761 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000210063s
	[INFO] 10.244.0.22:54173 - 50911 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000283735s
	[INFO] 10.244.0.22:50389 - 5142 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127951s
	[INFO] 10.244.0.22:40834 - 23523 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123532s
	[INFO] 10.244.0.22:38348 - 60069 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115321s
	[INFO] 10.244.0.22:44865 - 37183 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086393s
	[INFO] 10.244.0.22:43591 - 65010 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005267394s
	[INFO] 10.244.0.22:37649 - 56 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005332624s
	[INFO] 10.244.0.22:36465 - 57557 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004404219s
	[INFO] 10.244.0.22:48320 - 46661 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004754858s
	[INFO] 10.244.0.22:58183 - 11908 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003985172s
	[INFO] 10.244.0.22:59845 - 29266 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004341968s
	[INFO] 10.244.0.22:37700 - 55173 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000896281s
	[INFO] 10.244.0.22:35759 - 49979 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.002063482s
	
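	The coredns log above shows each lookup for registry.kube-system (and storage.googleapis.com) being expanded across the pod's search domains, returning NXDOMAIN for the cluster and GCE suffixes before the final name resolves with NOERROR. A hedged sketch of reproducing that expansion from the busybox pod in this cluster (the pod name comes from the report; nslookup availability and output format depend on the image):
	
	  # hedged sketch: reproduce the search-path expansion seen in the coredns log
	  kubectl exec busybox -- cat /etc/resolv.conf    # the search domains coredns walks
	  kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local
	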
	
	==> describe nodes <==
	Name:               addons-746247
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-746247
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=addons-746247
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T22_55_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-746247
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-746247"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 22:55:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-746247
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 22:56:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 22:56:44 +0000   Sun, 07 Dec 2025 22:55:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 22:56:44 +0000   Sun, 07 Dec 2025 22:55:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 22:56:44 +0000   Sun, 07 Dec 2025 22:55:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 22:56:44 +0000   Sun, 07 Dec 2025 22:56:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-746247
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                e3c766b8-0955-4cdf-b1a6-92b0d064495c
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-5bdddb765-8hk6l      0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  gadget                      gadget-8ktw6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  gcp-auth                    gcp-auth-78565c9fb4-x8dr5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-7h5rb    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         74s
	  kube-system                 amd-gpu-device-plugin-kblb2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 coredns-66bc5c9577-tphvv                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     75s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 csi-hostpathplugin-x5hj6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 etcd-addons-746247                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         81s
	  kube-system                 kindnet-r872z                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      75s
	  kube-system                 kube-apiserver-addons-746247                250m (3%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-addons-746247       200m (2%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-j7cvz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-addons-746247                100m (1%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 metrics-server-85b7d694d7-jnsx9             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         75s
	  kube-system                 nvidia-device-plugin-daemonset-gpckr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 registry-6b586f9694-wsdqp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 registry-creds-764b6fb674-vl9gn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 registry-proxy-d7n5r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 snapshot-controller-7d9fbc56b8-lg5vk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 snapshot-controller-7d9fbc56b8-nzqtx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  local-path-storage          local-path-provisioner-648f6765c9-5n4rs     0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-nkjk8              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 73s                kube-proxy       
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  86s (x8 over 86s)  kubelet          Node addons-746247 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s (x8 over 86s)  kubelet          Node addons-746247 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x8 over 86s)  kubelet          Node addons-746247 status is now: NodeHasSufficientPID
	  Normal  Starting                 81s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  81s                kubelet          Node addons-746247 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s                kubelet          Node addons-746247 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s                kubelet          Node addons-746247 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           76s                node-controller  Node addons-746247 event: Registered Node addons-746247 in Controller
	  Normal  NodeReady                64s                kubelet          Node addons-746247 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff a6 01 af c6 41 4f 08 06
	[Dec 7 22:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 bb b1 a0 5e a8 08 06
	[  +8.965245] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a 81 77 df 76 b3 08 06
	[  +4.785881] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f2 b6 1b dc 31 66 08 06
	[  +0.000336] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 bb b1 a0 5e a8 08 06
	[ +14.387030] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 06 5f 59 2e 13 4a 08 06
	[  +0.000384] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 81 77 df 76 b3 08 06
	[  +1.769034] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 6a 9c 72 77 5f 08 06
	[  +0.000466] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 01 af c6 41 4f 08 06
	[Dec 7 22:51] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 6e 6b 6b 11 60 08 06
	[  +0.101633] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea f6 93 38 ff e7 08 06
	[ +48.116062] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ee 0d 03 dc f4 50 08 06
	[  +0.000377] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea f6 93 38 ff e7 08 06
	
	
	==> etcd [bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856] <==
	{"level":"warn","ts":"2025-12-07T22:55:40.364525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.371171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.387459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.393955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.401535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.407913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.415531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.422260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.432137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.438509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.445139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.451769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.458876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.467465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.479578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.487473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.494445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:40.543462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:51.077860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:55:51.091590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:56:18.309570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:56:18.317996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T22:56:18.336534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53416","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T22:56:49.284966Z","caller":"traceutil/trace.go:172","msg":"trace[1814580813] transaction","detail":"{read_only:false; response_revision:1243; number_of_response:1; }","duration":"112.948708ms","start":"2025-12-07T22:56:49.171999Z","end":"2025-12-07T22:56:49.284947Z","steps":["trace[1814580813] 'process raft request'  (duration: 112.908673ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T22:56:49.284977Z","caller":"traceutil/trace.go:172","msg":"trace[1255087046] transaction","detail":"{read_only:false; response_revision:1242; number_of_response:1; }","duration":"113.294291ms","start":"2025-12-07T22:56:49.171664Z","end":"2025-12-07T22:56:49.284958Z","steps":["trace[1255087046] 'process raft request'  (duration: 113.165448ms)"],"step_count":1}
	
	
	==> gcp-auth [0cc657ef96c6a06ba798c4256933d459aefef6c133966e15175ec4d9bc8c814b] <==
	2025/12/07 22:56:42 GCP Auth Webhook started!
	2025/12/07 22:56:49 Ready to marshal response ...
	2025/12/07 22:56:49 Ready to write response ...
	2025/12/07 22:56:52 Ready to marshal response ...
	2025/12/07 22:56:52 Ready to write response ...
	2025/12/07 22:56:52 Ready to marshal response ...
	2025/12/07 22:56:52 Ready to write response ...
	
	
	==> kernel <==
	 22:57:04 up  1:39,  0 user,  load average: 1.51, 2.00, 2.29
	Linux addons-746247 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e] <==
	I1207 22:55:50.082927       1 main.go:148] setting mtu 1500 for CNI 
	I1207 22:55:50.082946       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 22:55:50.082961       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T22:55:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 22:55:50.381896       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 22:55:50.381963       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 22:55:50.381978       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 22:55:50.382676       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 22:55:50.782917       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 22:55:50.782950       1 metrics.go:72] Registering metrics
	I1207 22:55:50.783020       1 controller.go:711] "Syncing nftables rules"
	I1207 22:56:00.383776       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:56:00.383867       1 main.go:301] handling current node
	I1207 22:56:10.381989       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:56:10.382035       1 main.go:301] handling current node
	I1207 22:56:20.382699       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:56:20.382756       1 main.go:301] handling current node
	I1207 22:56:30.382594       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:56:30.382659       1 main.go:301] handling current node
	I1207 22:56:40.381951       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:56:40.381990       1 main.go:301] handling current node
	I1207 22:56:50.382567       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:56:50.382622       1 main.go:301] handling current node
	I1207 22:57:00.382213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 22:57:00.382269       1 main.go:301] handling current node
	
	
	==> kube-apiserver [070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191] <==
	E1207 22:56:09.368195       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.247.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.247.68:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.247.68:443: connect: connection refused" logger="UnhandledError"
	E1207 22:56:09.369879       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.247.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.247.68:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.247.68:443: connect: connection refused" logger="UnhandledError"
	E1207 22:56:09.375849       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.247.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.247.68:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.247.68:443: connect: connection refused" logger="UnhandledError"
	W1207 22:56:10.369126       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 22:56:10.369239       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1207 22:56:10.369266       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 22:56:10.369354       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 22:56:10.369378       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1207 22:56:10.370388       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1207 22:56:14.403687       1 handler_proxy.go:99] no RequestInfo found in the context
	E1207 22:56:14.403758       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1207 22:56:14.403806       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.247.68:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.247.68:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1207 22:56:14.412707       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1207 22:56:18.309511       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1207 22:56:18.318010       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1207 22:56:18.329785       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1207 22:56:18.336492       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1207 22:57:02.351590       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59272: use of closed network connection
	E1207 22:57:02.509214       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59292: use of closed network connection
	
	
	==> kube-controller-manager [2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb] <==
	I1207 22:55:48.290753       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1207 22:55:48.290761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1207 22:55:48.290835       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1207 22:55:48.290845       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1207 22:55:48.290845       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1207 22:55:48.290861       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1207 22:55:48.290883       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1207 22:55:48.291619       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1207 22:55:48.291654       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1207 22:55:48.294940       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:55:48.299196       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1207 22:55:48.304522       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1207 22:55:48.304527       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 22:55:48.304607       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1207 22:55:48.304636       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1207 22:55:48.304640       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1207 22:55:48.304645       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1207 22:55:48.311023       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-746247" podCIDRs=["10.244.0.0/24"]
	I1207 22:55:48.311949       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1207 22:56:03.263964       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1207 22:56:18.300565       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1207 22:56:18.300634       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1207 22:56:18.313197       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1207 22:56:18.401423       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 22:56:18.414089       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855] <==
	I1207 22:55:49.965166       1 server_linux.go:53] "Using iptables proxy"
	I1207 22:55:50.049680       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 22:55:50.150552       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 22:55:50.150589       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 22:55:50.150685       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 22:55:50.178560       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 22:55:50.178643       1 server_linux.go:132] "Using iptables Proxier"
	I1207 22:55:50.187540       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 22:55:50.188065       1 server.go:527] "Version info" version="v1.34.2"
	I1207 22:55:50.188106       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 22:55:50.189394       1 config.go:200] "Starting service config controller"
	I1207 22:55:50.189421       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 22:55:50.189442       1 config.go:309] "Starting node config controller"
	I1207 22:55:50.189447       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 22:55:50.189881       1 config.go:106] "Starting endpoint slice config controller"
	I1207 22:55:50.189917       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 22:55:50.189889       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 22:55:50.189976       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 22:55:50.289675       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 22:55:50.289812       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 22:55:50.292895       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 22:55:50.292911       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c] <==
	E1207 22:55:40.962641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:55:40.975891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:55:40.976269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:55:40.976352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 22:55:40.976418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 22:55:40.976485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 22:55:40.976553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 22:55:40.976600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 22:55:40.976646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 22:55:40.976773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:55:40.976813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 22:55:40.976853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 22:55:40.976985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:55:41.809698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 22:55:41.874465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 22:55:41.880508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 22:55:41.887543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 22:55:41.896836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 22:55:42.058879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 22:55:42.063881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 22:55:42.120025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1207 22:55:42.156121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 22:55:42.166386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 22:55:42.178507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1207 22:55:44.957718       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 22:56:31 addons-746247 kubelet[1278]: I1207 22:56:31.624424    1278 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68z4m\" (UniqueName: \"kubernetes.io/projected/1aacb494-7ca4-4411-b73b-acb602dbc164-kube-api-access-68z4m\") pod \"1aacb494-7ca4-4411-b73b-acb602dbc164\" (UID: \"1aacb494-7ca4-4411-b73b-acb602dbc164\") "
	Dec 07 22:56:31 addons-746247 kubelet[1278]: I1207 22:56:31.627181    1278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aacb494-7ca4-4411-b73b-acb602dbc164-kube-api-access-68z4m" (OuterVolumeSpecName: "kube-api-access-68z4m") pod "1aacb494-7ca4-4411-b73b-acb602dbc164" (UID: "1aacb494-7ca4-4411-b73b-acb602dbc164"). InnerVolumeSpecName "kube-api-access-68z4m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 07 22:56:31 addons-746247 kubelet[1278]: I1207 22:56:31.725120    1278 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-68z4m\" (UniqueName: \"kubernetes.io/projected/1aacb494-7ca4-4411-b73b-acb602dbc164-kube-api-access-68z4m\") on node \"addons-746247\" DevicePath \"\""
	Dec 07 22:56:32 addons-746247 kubelet[1278]: I1207 22:56:32.459254    1278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7696c3a2531d53ad8425b5a0dcc2ce1645241b88d0261abdcea71b448928c68f"
	Dec 07 22:56:32 addons-746247 kubelet[1278]: E1207 22:56:32.633499    1278 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 07 22:56:32 addons-746247 kubelet[1278]: E1207 22:56:32.633607    1278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd8e6cfd-a85b-4980-b193-cf4b6f8bc5b4-gcr-creds podName:fd8e6cfd-a85b-4980-b193-cf4b6f8bc5b4 nodeName:}" failed. No retries permitted until 2025-12-07 22:57:04.633581102 +0000 UTC m=+81.485645243 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/fd8e6cfd-a85b-4980-b193-cf4b6f8bc5b4-gcr-creds") pod "registry-creds-764b6fb674-vl9gn" (UID: "fd8e6cfd-a85b-4980-b193-cf4b6f8bc5b4") : secret "registry-creds-gcr" not found
	Dec 07 22:56:33 addons-746247 kubelet[1278]: I1207 22:56:33.232803    1278 scope.go:117] "RemoveContainer" containerID="5daa455d9ddcda0612a6f595e93561fa2cc3c0f7b0cf573da92573019361c549"
	Dec 07 22:56:33 addons-746247 kubelet[1278]: I1207 22:56:33.465853    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-d7n5r" secret="" err="secret \"gcp-auth\" not found"
	Dec 07 22:56:33 addons-746247 kubelet[1278]: I1207 22:56:33.467932    1278 scope.go:117] "RemoveContainer" containerID="5daa455d9ddcda0612a6f595e93561fa2cc3c0f7b0cf573da92573019361c549"
	Dec 07 22:56:33 addons-746247 kubelet[1278]: I1207 22:56:33.486241    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-d7n5r" podStartSLOduration=2.001929308 podStartE2EDuration="33.486219234s" podCreationTimestamp="2025-12-07 22:56:00 +0000 UTC" firstStartedPulling="2025-12-07 22:56:01.310293186 +0000 UTC m=+18.162357324" lastFinishedPulling="2025-12-07 22:56:32.794583124 +0000 UTC m=+49.646647250" observedRunningTime="2025-12-07 22:56:33.476529097 +0000 UTC m=+50.328593243" watchObservedRunningTime="2025-12-07 22:56:33.486219234 +0000 UTC m=+50.338283377"
	Dec 07 22:56:34 addons-746247 kubelet[1278]: I1207 22:56:34.472552    1278 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-d7n5r" secret="" err="secret \"gcp-auth\" not found"
	Dec 07 22:56:34 addons-746247 kubelet[1278]: I1207 22:56:34.645171    1278 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9kwz\" (UniqueName: \"kubernetes.io/projected/6bd93034-5267-488b-83a7-dffa7fd7236c-kube-api-access-f9kwz\") pod \"6bd93034-5267-488b-83a7-dffa7fd7236c\" (UID: \"6bd93034-5267-488b-83a7-dffa7fd7236c\") "
	Dec 07 22:56:34 addons-746247 kubelet[1278]: I1207 22:56:34.648110    1278 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd93034-5267-488b-83a7-dffa7fd7236c-kube-api-access-f9kwz" (OuterVolumeSpecName: "kube-api-access-f9kwz") pod "6bd93034-5267-488b-83a7-dffa7fd7236c" (UID: "6bd93034-5267-488b-83a7-dffa7fd7236c"). InnerVolumeSpecName "kube-api-access-f9kwz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 07 22:56:34 addons-746247 kubelet[1278]: I1207 22:56:34.746085    1278 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f9kwz\" (UniqueName: \"kubernetes.io/projected/6bd93034-5267-488b-83a7-dffa7fd7236c-kube-api-access-f9kwz\") on node \"addons-746247\" DevicePath \"\""
	Dec 07 22:56:35 addons-746247 kubelet[1278]: I1207 22:56:35.477753    1278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70e74b543958ac31ea529d36d3657dbf22a143af69b040db5a4b4eceea8cfffa"
	Dec 07 22:56:35 addons-746247 kubelet[1278]: I1207 22:56:35.495381    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-8ktw6" podStartSLOduration=17.724014458 podStartE2EDuration="46.495355421s" podCreationTimestamp="2025-12-07 22:55:49 +0000 UTC" firstStartedPulling="2025-12-07 22:56:06.593800385 +0000 UTC m=+23.445864508" lastFinishedPulling="2025-12-07 22:56:35.365141327 +0000 UTC m=+52.217205471" observedRunningTime="2025-12-07 22:56:35.495252973 +0000 UTC m=+52.347317139" watchObservedRunningTime="2025-12-07 22:56:35.495355421 +0000 UTC m=+52.347419568"
	Dec 07 22:56:40 addons-746247 kubelet[1278]: I1207 22:56:40.823314    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-7h5rb" podStartSLOduration=28.586942349 podStartE2EDuration="50.82329399s" podCreationTimestamp="2025-12-07 22:55:50 +0000 UTC" firstStartedPulling="2025-12-07 22:56:16.966287514 +0000 UTC m=+33.818351637" lastFinishedPulling="2025-12-07 22:56:39.202639143 +0000 UTC m=+56.054703278" observedRunningTime="2025-12-07 22:56:39.512885466 +0000 UTC m=+56.364949610" watchObservedRunningTime="2025-12-07 22:56:40.82329399 +0000 UTC m=+57.675358133"
	Dec 07 22:56:42 addons-746247 kubelet[1278]: I1207 22:56:42.523786    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-x8dr5" podStartSLOduration=37.146406328 podStartE2EDuration="46.523761692s" podCreationTimestamp="2025-12-07 22:55:56 +0000 UTC" firstStartedPulling="2025-12-07 22:56:32.947203733 +0000 UTC m=+49.799267869" lastFinishedPulling="2025-12-07 22:56:42.324559107 +0000 UTC m=+59.176623233" observedRunningTime="2025-12-07 22:56:42.522249581 +0000 UTC m=+59.374313725" watchObservedRunningTime="2025-12-07 22:56:42.523761692 +0000 UTC m=+59.375825836"
	Dec 07 22:56:44 addons-746247 kubelet[1278]: I1207 22:56:44.267319    1278 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 07 22:56:44 addons-746247 kubelet[1278]: I1207 22:56:44.267394    1278 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 07 22:56:46 addons-746247 kubelet[1278]: I1207 22:56:46.555130    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-x5hj6" podStartSLOduration=1.985037086 podStartE2EDuration="46.555106953s" podCreationTimestamp="2025-12-07 22:56:00 +0000 UTC" firstStartedPulling="2025-12-07 22:56:01.201452101 +0000 UTC m=+18.053516224" lastFinishedPulling="2025-12-07 22:56:45.771521965 +0000 UTC m=+62.623586091" observedRunningTime="2025-12-07 22:56:46.554215537 +0000 UTC m=+63.406279717" watchObservedRunningTime="2025-12-07 22:56:46.555106953 +0000 UTC m=+63.407171097"
	Dec 07 22:56:52 addons-746247 kubelet[1278]: I1207 22:56:52.278133    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddhqk\" (UniqueName: \"kubernetes.io/projected/5e12dbfd-83fd-46c1-9d58-5e26d50cf46f-kube-api-access-ddhqk\") pod \"busybox\" (UID: \"5e12dbfd-83fd-46c1-9d58-5e26d50cf46f\") " pod="default/busybox"
	Dec 07 22:56:52 addons-746247 kubelet[1278]: I1207 22:56:52.278236    1278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5e12dbfd-83fd-46c1-9d58-5e26d50cf46f-gcp-creds\") pod \"busybox\" (UID: \"5e12dbfd-83fd-46c1-9d58-5e26d50cf46f\") " pod="default/busybox"
	Dec 07 22:56:55 addons-746247 kubelet[1278]: I1207 22:56:55.586823    1278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.273864737 podStartE2EDuration="3.586801061s" podCreationTimestamp="2025-12-07 22:56:52 +0000 UTC" firstStartedPulling="2025-12-07 22:56:52.528036866 +0000 UTC m=+69.380101001" lastFinishedPulling="2025-12-07 22:56:54.840973187 +0000 UTC m=+71.693037325" observedRunningTime="2025-12-07 22:56:55.586649182 +0000 UTC m=+72.438713337" watchObservedRunningTime="2025-12-07 22:56:55.586801061 +0000 UTC m=+72.438865208"
	Dec 07 22:57:03 addons-746247 kubelet[1278]: I1207 22:57:03.235163    1278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aacb494-7ca4-4411-b73b-acb602dbc164" path="/var/lib/kubelet/pods/1aacb494-7ca4-4411-b73b-acb602dbc164/volumes"
	
	
	==> storage-provisioner [c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac] <==
	W1207 22:56:39.485794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:41.490855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:41.495606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:43.498582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:43.503276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:45.507026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:45.511416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:47.514228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:47.519028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:49.521767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:49.525513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:51.528366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:51.531668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:53.535529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:53.539583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:55.542479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:55.548251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:57.551136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:57.555288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:59.558253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:56:59.563281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:57:01.566625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:57:01.570462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:57:03.573471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 22:57:03.577671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-746247 -n addons-746247
helpers_test.go:269: (dbg) Run:  kubectl --context addons-746247 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-patch-7jxpl ingress-nginx-admission-create-bkb7d ingress-nginx-admission-patch-klnc2 registry-creds-764b6fb674-vl9gn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-746247 describe pod gcp-auth-certs-patch-7jxpl ingress-nginx-admission-create-bkb7d ingress-nginx-admission-patch-klnc2 registry-creds-764b6fb674-vl9gn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-746247 describe pod gcp-auth-certs-patch-7jxpl ingress-nginx-admission-create-bkb7d ingress-nginx-admission-patch-klnc2 registry-creds-764b6fb674-vl9gn: exit status 1 (69.499087ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-7jxpl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-bkb7d" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-klnc2" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-vl9gn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-746247 describe pod gcp-auth-certs-patch-7jxpl ingress-nginx-admission-create-bkb7d ingress-nginx-admission-patch-klnc2 registry-creds-764b6fb674-vl9gn: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable headlamp --alsologtostderr -v=1: exit status 11 (251.352976ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:57:05.117278  403840 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:05.117585  403840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:05.117596  403840 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:05.117600  403840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:05.117791  403840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:05.118038  403840 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:05.118421  403840 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:05.118446  403840 addons.go:622] checking whether the cluster is paused
	I1207 22:57:05.118534  403840 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:05.118553  403840 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:05.118966  403840 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:05.137365  403840 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:05.137423  403840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:05.155815  403840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:05.248923  403840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:05.248993  403840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:05.278916  403840 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:05.278944  403840 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:05.278950  403840 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:05.278955  403840 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:05.278960  403840 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:05.278966  403840 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:05.278970  403840 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:05.278975  403840 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:05.278978  403840 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:05.278993  403840 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:05.278996  403840 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:05.278999  403840 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:05.279002  403840 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:05.279004  403840 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:05.279007  403840 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:05.279012  403840 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:05.279014  403840 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:05.279018  403840 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:05.279026  403840 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:05.279029  403840 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:05.279032  403840 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:05.279034  403840 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:05.279037  403840 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:05.279040  403840 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:05.279043  403840 cri.go:89] found id: ""
	I1207 22:57:05.279083  403840 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:05.294112  403840 out.go:203] 
	W1207 22:57:05.295524  403840 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:05.295552  403840 out.go:285] * 
	* 
	W1207 22:57:05.299468  403840 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:05.300941  403840 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.54s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-8hk6l" [2c58f1c7-a0d2-4b78-b928-c2795ab3a316] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003831984s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (243.747261ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 22:57:24.626923  406301 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:24.627030  406301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:24.627035  406301 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:24.627039  406301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:24.627272  406301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:24.627642  406301 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:24.628042  406301 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:24.628069  406301 addons.go:622] checking whether the cluster is paused
	I1207 22:57:24.628192  406301 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:24.628220  406301 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:24.628731  406301 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:24.647503  406301 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:24.647568  406301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:24.665294  406301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:24.759165  406301 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:24.759240  406301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:24.788788  406301 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:24.788826  406301 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:24.788832  406301 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:24.788837  406301 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:24.788842  406301 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:24.788847  406301 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:24.788851  406301 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:24.788854  406301 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:24.788856  406301 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:24.788871  406301 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:24.788877  406301 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:24.788887  406301 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:24.788891  406301 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:24.788899  406301 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:24.788904  406301 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:24.788921  406301 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:24.788929  406301 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:24.788935  406301 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:24.788938  406301 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:24.788941  406301 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:24.788946  406301 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:24.788953  406301 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:24.788958  406301 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:24.788966  406301 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:24.788970  406301 cri.go:89] found id: ""
	I1207 22:57:24.789023  406301 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:24.803570  406301 out.go:203] 
	W1207 22:57:24.804806  406301 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:24.804821  406301 out.go:285] * 
	* 
	W1207 22:57:24.808808  406301 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:24.810015  406301 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                    
TestAddons/parallel/LocalPath (10.12s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-746247 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-746247 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-746247 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [6c71bce7-a49b-4e94-b5a4-17c2fd29b675] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [6c71bce7-a49b-4e94-b5a4-17c2fd29b675] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [6c71bce7-a49b-4e94-b5a4-17c2fd29b675] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002882048s
addons_test.go:967: (dbg) Run:  kubectl --context addons-746247 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 ssh "cat /opt/local-path-provisioner/pvc-6cdaae25-a8c6-4a95-9d95-59adcfad1439_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-746247 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-746247 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (252.602296ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1207 22:57:19.367128  405902 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:19.367433  405902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:19.367447  405902 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:19.367451  405902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:19.367734  405902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:19.368071  405902 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:19.368430  405902 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:19.368453  405902 addons.go:622] checking whether the cluster is paused
	I1207 22:57:19.368559  405902 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:19.368585  405902 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:19.369010  405902 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:19.387125  405902 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:19.387193  405902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:19.406748  405902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:19.500952  405902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:19.501038  405902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:19.532277  405902 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:19.532302  405902 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:19.532309  405902 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:19.532314  405902 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:19.532319  405902 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:19.532343  405902 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:19.532348  405902 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:19.532353  405902 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:19.532357  405902 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:19.532372  405902 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:19.532379  405902 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:19.532382  405902 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:19.532385  405902 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:19.532388  405902 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:19.532391  405902 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:19.532395  405902 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:19.532400  405902 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:19.532406  405902 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:19.532409  405902 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:19.532411  405902 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:19.532414  405902 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:19.532417  405902 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:19.532420  405902 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:19.532423  405902 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:19.532426  405902 cri.go:89] found id: ""
	I1207 22:57:19.532492  405902 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:19.548089  405902 out.go:203] 
	W1207 22:57:19.549604  405902 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:19.549633  405902 out.go:285] * 
	* 
	W1207 22:57:19.553775  405902 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:19.555468  405902 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.12s)
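
Note: the local-path workflow itself passed; the failure above (and in the remaining addon tests in this group) is confined to the closing "addons disable" call. That command first checks whether the cluster is paused, and the check shells out to "sudo runc list -f json", which exits 1 on this CRI-O node because /run/runc does not exist. A minimal sketch for reproducing the guard by hand, reusing the two commands that appear verbatim in the stderr above (only the profile name is taken from the report; driving them through "minikube ssh" like this is an assumption):

	# list kube-system containers the way the disable path does
	out/minikube-linux-amd64 -p addons-746247 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# the paused-state probe then calls runc directly; on this node it fails with
	# "open /run/runc: no such file or directory", which surfaces as MK_ADDON_DISABLE_PAUSED
	out/minikube-linux-amd64 -p addons-746247 ssh "sudo runc list -f json"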

TestAddons/parallel/NvidiaDevicePlugin (5.3s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-gpckr" [db82d55a-0dbb-4348-a938-da80fe468a31] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003603148s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (297.807388ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1207 22:57:20.950800  406089 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:20.950958  406089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:20.950971  406089 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:20.950977  406089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:20.951251  406089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:20.951628  406089 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:20.952144  406089 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:20.952176  406089 addons.go:622] checking whether the cluster is paused
	I1207 22:57:20.952322  406089 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:20.952367  406089 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:20.952941  406089 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:20.976912  406089 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:20.976990  406089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:21.001749  406089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:21.104070  406089 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:21.104171  406089 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:21.141350  406089 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:21.141389  406089 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:21.141395  406089 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:21.141400  406089 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:21.141405  406089 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:21.141411  406089 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:21.141416  406089 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:21.141420  406089 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:21.141425  406089 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:21.141448  406089 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:21.141457  406089 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:21.141461  406089 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:21.141466  406089 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:21.141470  406089 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:21.141474  406089 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:21.141489  406089 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:21.141499  406089 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:21.141505  406089 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:21.141510  406089 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:21.141515  406089 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:21.141522  406089 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:21.141527  406089 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:21.141531  406089 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:21.141535  406089 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:21.141539  406089 cri.go:89] found id: ""
	I1207 22:57:21.141608  406089 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:21.159947  406089 out.go:203] 
	W1207 22:57:21.161249  406089 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:21.161288  406089 out.go:285] * 
	* 
	W1207 22:57:21.166989  406089 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:21.168449  406089 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.30s)

TestAddons/parallel/Yakd (5.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-nkjk8" [70529023-bf2a-4fad-b711-d6e4d9a71b2f] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00354853s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable yakd --alsologtostderr -v=1: exit status 11 (257.58604ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1207 22:57:15.676107  405281 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:15.676449  405281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:15.676460  405281 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:15.676466  405281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:15.676782  405281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:15.677131  405281 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:15.677553  405281 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:15.677579  405281 addons.go:622] checking whether the cluster is paused
	I1207 22:57:15.677671  405281 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:15.677687  405281 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:15.678061  405281 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:15.699028  405281 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:15.699110  405281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:15.717688  405281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:15.811975  405281 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:15.812071  405281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:15.843652  405281 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:15.843674  405281 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:15.843678  405281 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:15.843681  405281 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:15.843684  405281 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:15.843689  405281 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:15.843693  405281 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:15.843697  405281 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:15.843701  405281 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:15.843723  405281 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:15.843730  405281 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:15.843733  405281 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:15.843735  405281 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:15.843738  405281 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:15.843741  405281 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:15.843746  405281 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:15.843751  405281 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:15.843756  405281 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:15.843759  405281 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:15.843761  405281 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:15.843764  405281 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:15.843772  405281 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:15.843777  405281 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:15.843780  405281 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:15.843783  405281 cri.go:89] found id: ""
	I1207 22:57:15.843822  405281 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:15.859517  405281 out.go:203] 
	W1207 22:57:15.860507  405281 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:15.860532  405281 out.go:285] * 
	* 
	W1207 22:57:15.864431  405281 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:15.865615  405281 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)
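
Note: as above, yakd itself reported healthy and the test only fails in the paused-state guard of "addons disable". A quick client-side sanity check that the profile is not actually paused could look like this (a sketch; only the profile name comes from the report, and the expectation of a non-zero count is an assumption about a healthy node):

	out/minikube-linux-amd64 -p addons-746247 status
	# crictl ps lists running containers by default; an unpaused kube-system should show several
	out/minikube-linux-amd64 -p addons-746247 ssh "sudo crictl ps --quiet | wc -l"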

TestAddons/parallel/AmdGpuDevicePlugin (6.26s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-kblb2" [0d7d3c61-b559-4b2d-ad9c-0c55bd5a52ee] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003789321s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-746247 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-746247 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (251.424963ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1207 22:57:08.835438  404324 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:57:08.835556  404324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:08.835565  404324 out.go:374] Setting ErrFile to fd 2...
	I1207 22:57:08.835570  404324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:57:08.835777  404324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:57:08.836050  404324 mustload.go:66] Loading cluster: addons-746247
	I1207 22:57:08.836390  404324 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:08.836416  404324 addons.go:622] checking whether the cluster is paused
	I1207 22:57:08.836501  404324 config.go:182] Loaded profile config "addons-746247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 22:57:08.836525  404324 host.go:66] Checking if "addons-746247" exists ...
	I1207 22:57:08.836911  404324 cli_runner.go:164] Run: docker container inspect addons-746247 --format={{.State.Status}}
	I1207 22:57:08.856137  404324 ssh_runner.go:195] Run: systemctl --version
	I1207 22:57:08.856193  404324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-746247
	I1207 22:57:08.874114  404324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/addons-746247/id_rsa Username:docker}
	I1207 22:57:08.967164  404324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 22:57:08.967256  404324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 22:57:08.997060  404324 cri.go:89] found id: "15d6c69879b1c9d09b82e4b5031bbbf34135b1f9bd979dea9f9f0f72f6fd51c8"
	I1207 22:57:08.997084  404324 cri.go:89] found id: "5fb12f5f4df2a1240cc8c210ab01b8888c98b0e557e9f3cc7ca744b1cea7d969"
	I1207 22:57:08.997090  404324 cri.go:89] found id: "fe56a017640b65af58831a24e810c5770fc372ade72500a7ef5cde7d37f3ff2a"
	I1207 22:57:08.997096  404324 cri.go:89] found id: "504d8b39e428bcf1fba0674f9f798df8c411b5d88014118f294c3efb546d0697"
	I1207 22:57:08.997101  404324 cri.go:89] found id: "50ad042517d0afe511c861b3ef18e6f89845648a1770b53fd53f3cc495f5a87e"
	I1207 22:57:08.997107  404324 cri.go:89] found id: "b28acd3bc252ae2090058f6c5f790414100d389c691000c749b4cc4ffeaaa79b"
	I1207 22:57:08.997111  404324 cri.go:89] found id: "1dad0dc0225103ed53f3ee4143c3ceff2347afd54237a96641893e36d40210f3"
	I1207 22:57:08.997116  404324 cri.go:89] found id: "7e6ab6bbbad333b2ff082b8ea3bab7762ffc7ef0c2ab04730063a59583be7141"
	I1207 22:57:08.997121  404324 cri.go:89] found id: "2ee9d403c718ad1071a4191fc7909302e0c5c99a980da0841bc028a064062feb"
	I1207 22:57:08.997128  404324 cri.go:89] found id: "d235bae133495f0f39c9d96866f02fe9e69074a4fa3760b3ca2223c3c55f1fdc"
	I1207 22:57:08.997133  404324 cri.go:89] found id: "dd2a1ddd16307b90c23b79922c3c697d8af8058539cc18dde5ec83dbb37624e5"
	I1207 22:57:08.997215  404324 cri.go:89] found id: "08fe42979fddbd1da206b7da0fd7f120a51c3544d5765bb4437a2b3a850217cf"
	I1207 22:57:08.997246  404324 cri.go:89] found id: "79ffbf10d4d6ab250715b396039a119ab1754f8e92841abc0705ff75b50dddad"
	I1207 22:57:08.997252  404324 cri.go:89] found id: "0a5bc6342e0fa615eb4b4c3ff68c6b411b7597a99b09c0ddfbad42f794634308"
	I1207 22:57:08.997285  404324 cri.go:89] found id: "125a62d8c60a9ec08a22d06c8690567a309e13fd8ede4423ac18b3684ed3a1eb"
	I1207 22:57:08.997306  404324 cri.go:89] found id: "f0439486741224d12b7d1a01f1b4080435a3b8ef6cee51988784ad3f75baa93a"
	I1207 22:57:08.997317  404324 cri.go:89] found id: "c09a0b77cbea1a4048f11ca0f248eaeb2aceb4d39363d2dda5f5e7c8d69b2bac"
	I1207 22:57:08.997337  404324 cri.go:89] found id: "c7ac4b9dcfe980e1f0ca5380837549fae2f8f4737f218aa46ee31003340f1f0e"
	I1207 22:57:08.997342  404324 cri.go:89] found id: "d9470261de6e4a9958176fb20e77f6052bc581ef6fa6b17b1c7111575d256855"
	I1207 22:57:08.997346  404324 cri.go:89] found id: "4cd369ec2d01ec0d2cfe7dfec0cafb653f048fc6c10abf58dfc2c354f5a55a1e"
	I1207 22:57:08.997354  404324 cri.go:89] found id: "2f96412fe3f9d0a7efea60c8dc6942a2a0b32d17e4b7caa468ec8aaad5361efb"
	I1207 22:57:08.997358  404324 cri.go:89] found id: "070b82a22d636912841f98c83010d7d2b8a760e29cb7bd78b694310c4e09a191"
	I1207 22:57:08.997363  404324 cri.go:89] found id: "bbb24b899c6b3630a13d72e60f393052186f583f097e132d0109458022915856"
	I1207 22:57:08.997379  404324 cri.go:89] found id: "cb318a4f623488d2891c4bf7dee2a7de142b6991456ad8b7f7dbeb036b386a2c"
	I1207 22:57:08.997387  404324 cri.go:89] found id: ""
	I1207 22:57:08.997436  404324 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 22:57:09.012520  404324 out.go:203] 
	W1207 22:57:09.013804  404324 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T22:57:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 22:57:09.013820  404324 out.go:285] * 
	* 
	W1207 22:57:09.017776  404324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 22:57:09.019283  404324 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-746247 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.26s)
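
Note: every addon test in this block (LocalPath, NvidiaDevicePlugin, Yakd, AmdGpuDevicePlugin) stops at the same "sudo runc list -f json" probe, so the open question is which low-level OCI runtime CRI-O is actually configured with on this node. A sketch for inspecting that (the state-directory paths below are assumptions about where runc and crun keep their runtime state):

	# dump the CRI-O runtime configuration as crictl sees it
	out/minikube-linux-amd64 -p addons-746247 ssh "sudo crictl info"
	# check whether a runc or crun state directory exists on the node
	out/minikube-linux-amd64 -p addons-746247 ssh "ls -d /run/runc /run/crun 2>&1"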

TestMultiControlPlane/serial/RestartCluster (263.15s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1207 23:12:00.472209  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:13:18.418570  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:13:22.394383  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:13:46.121130  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:15:38.534098  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:16:06.236247  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-907658 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (4m21.242091166s)

-- stdout --
	* [ha-907658] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-907658" primary control-plane node in "ha-907658" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	* Enabled addons: 
	
	* Starting "ha-907658-m02" control-plane node in "ha-907658" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-907658-m04" worker node in "ha-907658" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	
-- /stdout --
** stderr ** 
	I1207 23:11:52.723208  487084 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:11:52.723342  487084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:11:52.723354  487084 out.go:374] Setting ErrFile to fd 2...
	I1207 23:11:52.723361  487084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:11:52.723559  487084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:11:52.724064  487084 out.go:368] Setting JSON to false
	I1207 23:11:52.725035  487084 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6857,"bootTime":1765142256,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:11:52.725102  487084 start.go:143] virtualization: kvm guest
	I1207 23:11:52.726965  487084 out.go:179] * [ha-907658] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:11:52.728170  487084 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:11:52.728167  487084 notify.go:221] Checking for updates...
	I1207 23:11:52.730209  487084 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:11:52.731286  487084 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:11:52.732435  487084 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:11:52.733509  487084 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:11:52.734621  487084 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:11:52.736265  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:52.736931  487084 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:11:52.761948  487084 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:11:52.762088  487084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:11:52.815796  487084 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-07 23:11:52.805859782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:11:52.815895  487084 docker.go:319] overlay module found
	I1207 23:11:52.818644  487084 out.go:179] * Using the docker driver based on existing profile
	I1207 23:11:52.819812  487084 start.go:309] selected driver: docker
	I1207 23:11:52.819828  487084 start.go:927] validating driver "docker" against &{Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:11:52.819961  487084 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:11:52.820059  487084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:11:52.873900  487084 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-07 23:11:52.864641727 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:11:52.874579  487084 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:11:52.874614  487084 cni.go:84] Creating CNI manager for ""
	I1207 23:11:52.874670  487084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1207 23:11:52.874722  487084 start.go:353] cluster config:
	{Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:11:52.876967  487084 out.go:179] * Starting "ha-907658" primary control-plane node in "ha-907658" cluster
	I1207 23:11:52.877923  487084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:11:52.878975  487084 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:11:52.880201  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:11:52.880231  487084 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 23:11:52.880239  487084 cache.go:65] Caching tarball of preloaded images
	I1207 23:11:52.880300  487084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:11:52.880362  487084 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:11:52.880377  487084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:11:52.880537  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:52.900771  487084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:11:52.900792  487084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:11:52.900810  487084 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:11:52.900849  487084 start.go:360] acquireMachinesLock for ha-907658: {Name:mkd7016770bc40ef9cd544023d232b92bc7cf832 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:11:52.900927  487084 start.go:364] duration metric: took 42.672µs to acquireMachinesLock for "ha-907658"
	I1207 23:11:52.900952  487084 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:11:52.900961  487084 fix.go:54] fixHost starting: 
	I1207 23:11:52.901168  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:11:52.918459  487084 fix.go:112] recreateIfNeeded on ha-907658: state=Stopped err=<nil>
	W1207 23:11:52.918485  487084 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:11:52.920300  487084 out.go:252] * Restarting existing docker container for "ha-907658" ...
	I1207 23:11:52.920381  487084 cli_runner.go:164] Run: docker start ha-907658
	I1207 23:11:53.154762  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:11:53.172884  487084 kic.go:430] container "ha-907658" state is running.
	I1207 23:11:53.173368  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:11:53.192850  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:53.193082  487084 machine.go:94] provisionDockerMachine start ...
	I1207 23:11:53.193169  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:53.211683  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:53.211988  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:53.212008  487084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:11:53.212567  487084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40796->127.0.0.1:33213: read: connection reset by peer
	I1207 23:11:56.342986  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658
	
	I1207 23:11:56.343016  487084 ubuntu.go:182] provisioning hostname "ha-907658"
	I1207 23:11:56.343087  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:56.361678  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:56.361914  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:56.361928  487084 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-907658 && echo "ha-907658" | sudo tee /etc/hostname
	I1207 23:11:56.498208  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658
	
	I1207 23:11:56.498287  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:56.517144  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:56.517409  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:56.517428  487084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-907658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-907658/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-907658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:11:56.645103  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:11:56.645138  487084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:11:56.645173  487084 ubuntu.go:190] setting up certificates
	I1207 23:11:56.645187  487084 provision.go:84] configureAuth start
	I1207 23:11:56.645254  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:11:56.663482  487084 provision.go:143] copyHostCerts
	I1207 23:11:56.663535  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:11:56.663565  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:11:56.663574  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:11:56.663652  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:11:56.663767  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:11:56.663794  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:11:56.663802  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:11:56.663845  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:11:56.663928  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:11:56.663951  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:11:56.663961  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:11:56.663999  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:11:56.664154  487084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.ha-907658 san=[127.0.0.1 192.168.49.2 ha-907658 localhost minikube]
	I1207 23:11:56.859476  487084 provision.go:177] copyRemoteCerts
	I1207 23:11:56.859539  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:11:56.859583  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:56.877854  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:56.971727  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 23:11:56.971784  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1207 23:11:56.989675  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 23:11:56.989726  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:11:57.006645  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 23:11:57.006699  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:11:57.024214  487084 provision.go:87] duration metric: took 379.007514ms to configureAuth
	I1207 23:11:57.024242  487084 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:11:57.024505  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:57.024648  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.043106  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:57.043322  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:57.043362  487084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:11:57.351275  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:11:57.351301  487084 machine.go:97] duration metric: took 4.158205159s to provisionDockerMachine
	I1207 23:11:57.351316  487084 start.go:293] postStartSetup for "ha-907658" (driver="docker")
	I1207 23:11:57.351345  487084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:11:57.351414  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:11:57.351463  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.370902  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.463959  487084 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:11:57.467550  487084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:11:57.467577  487084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:11:57.467590  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:11:57.467657  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:11:57.467762  487084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:11:57.467778  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /etc/ssl/certs/3931252.pem
	I1207 23:11:57.467888  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:11:57.475351  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:11:57.492383  487084 start.go:296] duration metric: took 141.051455ms for postStartSetup
	I1207 23:11:57.492490  487084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:11:57.492538  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.510719  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.601727  487084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:11:57.606180  487084 fix.go:56] duration metric: took 4.705212142s for fixHost
	I1207 23:11:57.606209  487084 start.go:83] releasing machines lock for "ha-907658", held for 4.705267868s
	I1207 23:11:57.606320  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:11:57.624104  487084 ssh_runner.go:195] Run: cat /version.json
	I1207 23:11:57.624182  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.624209  487084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:11:57.624294  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.642922  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.643662  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.785793  487084 ssh_runner.go:195] Run: systemctl --version
	I1207 23:11:57.792308  487084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:11:57.826743  487084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:11:57.831572  487084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:11:57.831644  487084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:11:57.839631  487084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:11:57.839653  487084 start.go:496] detecting cgroup driver to use...
	I1207 23:11:57.839690  487084 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:11:57.839733  487084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:11:57.853650  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:11:57.866122  487084 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:11:57.866194  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:11:57.880612  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:11:57.893020  487084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:11:57.971718  487084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:11:58.051170  487084 docker.go:234] disabling docker service ...
	I1207 23:11:58.051240  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:11:58.065815  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:11:58.078071  487084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:11:58.159158  487084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:11:58.241617  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:11:58.253808  487084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:11:58.267810  487084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:11:58.267865  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.276619  487084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:11:58.276694  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.285159  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.293362  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.301983  487084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:11:58.310270  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.319027  487084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.327563  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.336683  487084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:11:58.344663  487084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:11:58.352591  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:11:58.430723  487084 ssh_runner.go:195] Run: sudo systemctl restart crio
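The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. For anyone replaying this run by hand, a rough way to confirm the settings landed (a sketch assuming the same profile name; these commands are not part of the captured log):
	# Show the pause image, cgroup manager and sysctl settings CRI-O was left with
	minikube -p ha-907658 ssh -- \
	  "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"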
	I1207 23:11:58.561670  487084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:11:58.561748  487084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:11:58.565839  487084 start.go:564] Will wait 60s for crictl version
	I1207 23:11:58.565925  487084 ssh_runner.go:195] Run: which crictl
	I1207 23:11:58.569353  487084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:11:58.593853  487084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:11:58.593949  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:11:58.621201  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:11:58.650380  487084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:11:58.651543  487084 cli_runner.go:164] Run: docker network inspect ha-907658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:11:58.669539  487084 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:11:58.673718  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:11:58.684392  487084 kubeadm.go:884] updating cluster {Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:11:58.684550  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:11:58.684610  487084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:11:58.716893  487084 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:11:58.716915  487084 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:11:58.717012  487084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:11:58.743428  487084 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:11:58.743474  487084 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:11:58.743483  487084 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1207 23:11:58.743593  487084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-907658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
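The kubelet flags above are written out a few lines further down as a systemd unit plus drop-in (kubelet.service and 10-kubeadm.conf). A minimal sketch for inspecting what kubelet actually runs with, assuming the same profile (not part of the captured log):
	# Print the kubelet unit file together with its drop-ins
	minikube -p ha-907658 ssh -- "sudo systemctl cat kubelet"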
	I1207 23:11:58.743655  487084 ssh_runner.go:195] Run: crio config
	I1207 23:11:58.789302  487084 cni.go:84] Creating CNI manager for ""
	I1207 23:11:58.789345  487084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1207 23:11:58.789368  487084 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:11:58.789396  487084 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-907658 NodeName:ha-907658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:11:58.789521  487084 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-907658"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
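	minikube writes this rendered config to /var/tmp/minikube/kubeadm.yaml.new and later diffs it against the copy already on the node (see the diff step further down in this log). The same comparison can be done by hand; a sketch using the paths shown here:
	# Compare the freshly rendered kubeadm config with the one already on the node
	minikube -p ha-907658 ssh -- \
	  "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new"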
	
	I1207 23:11:58.789548  487084 kube-vip.go:115] generating kube-vip config ...
	I1207 23:11:58.789589  487084 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1207 23:11:58.801884  487084 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
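Because "lsmod | grep ip_vs" fails inside the node, kube-vip falls back to running without control-plane load-balancing. If ipvs-backed load-balancing were wanted, the modules would have to be available from the host kernel first; a hypothetical check, not run in this job:
	# On the host running the docker driver: check for, and if possible load, the ipvs modules
	lsmod | grep ip_vs || sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh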
	I1207 23:11:58.802014  487084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
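	This static pod advertises the HA virtual IP 192.168.49.254 on eth0. Once the control planes are back up, a quick hand check that the VIP answers could look like this (illustrative only; /healthz is normally readable without credentials):
	# From the host running the docker driver
	curl -k https://192.168.49.254:8443/healthz
	kubectl --context ha-907658 -n kube-system get pods -o wide | grep kube-vip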
	I1207 23:11:58.802092  487084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:11:58.809827  487084 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:11:58.809897  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1207 23:11:58.817290  487084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1207 23:11:58.829895  487084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:11:58.842148  487084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1207 23:11:58.854128  487084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1207 23:11:58.866494  487084 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1207 23:11:58.870208  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:11:58.879832  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:11:58.957062  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:11:58.981696  487084 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658 for IP: 192.168.49.2
	I1207 23:11:58.981720  487084 certs.go:195] generating shared ca certs ...
	I1207 23:11:58.981747  487084 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:58.981923  487084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:11:58.981976  487084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:11:58.981990  487084 certs.go:257] generating profile certs ...
	I1207 23:11:58.982095  487084 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key
	I1207 23:11:58.982127  487084 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7
	I1207 23:11:58.982147  487084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1207 23:11:59.053446  487084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7 ...
	I1207 23:11:59.053484  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7: {Name:mkde9a77ed2ccf374bbd7ef2ab8471222e930ca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.053683  487084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7 ...
	I1207 23:11:59.053700  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7: {Name:mkf9f5e1f2966de715814128c39c83c05472c22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.053837  487084 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt
	I1207 23:11:59.054023  487084 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key
	I1207 23:11:59.054208  487084 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key
	I1207 23:11:59.054223  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 23:11:59.054240  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 23:11:59.054254  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 23:11:59.054268  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 23:11:59.054285  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 23:11:59.054298  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 23:11:59.054315  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 23:11:59.054346  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 23:11:59.054449  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:11:59.054492  487084 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:11:59.054503  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:11:59.054539  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:11:59.054597  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:11:59.054627  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:11:59.054683  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:11:59.054723  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem -> /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.054754  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.054767  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.055522  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:11:59.076096  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:11:59.092913  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:11:59.110126  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:11:59.126855  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1207 23:11:59.143407  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:11:59.160896  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:11:59.178517  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:11:59.196273  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:11:59.213156  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:11:59.230319  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:11:59.247989  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:11:59.259981  487084 ssh_runner.go:195] Run: openssl version
	I1207 23:11:59.265807  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.273185  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:11:59.280496  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.284023  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.284068  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.318047  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:11:59.325928  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.332951  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:11:59.340016  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.343716  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.343772  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.377866  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:11:59.386064  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.393852  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:11:59.401598  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.405548  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.405622  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.439621  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:11:59.447485  487084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:11:59.451341  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:11:59.493084  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:11:59.535906  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:11:59.583567  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:11:59.642172  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:11:59.681845  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
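The -checkend 86400 calls above only say whether each certificate survives another 24 hours. To see the actual expiry dates, the same files can be inspected directly; a sketch using one of the paths from this log:
	# Print subject and notAfter for the apiserver serving certificate
	minikube -p ha-907658 ssh -- \
	  "sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt"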
	I1207 23:11:59.717892  487084 kubeadm.go:401] StartCluster: {Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:11:59.718040  487084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:11:59.718122  487084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:11:59.750509  487084 cri.go:89] found id: "86601d9f6ba07c5cc957fcd84ee14c9ed14e0f86e2c332659c8fd9ca9c473cdd"
	I1207 23:11:59.750537  487084 cri.go:89] found id: "3102169518f14fb026edc01e1247ff4c2edc1292fb8d6ddab3310dc29262b65d"
	I1207 23:11:59.750543  487084 cri.go:89] found id: "87abab3f9975c7d1ffa51c90a94a832599db31aa8d9e2e4cdcccfa593c87020f"
	I1207 23:11:59.750548  487084 cri.go:89] found id: "db1d97b6874004dcfa1bfc301e8470ac6e8ab810f5002178c4d64e0899af2340"
	I1207 23:11:59.750560  487084 cri.go:89] found id: "04ab6dc0a72c2fd9ce998abf808c8139e9d16737d96e3dc5573726403cfba770"
	I1207 23:11:59.750567  487084 cri.go:89] found id: ""
	I1207 23:11:59.750620  487084 ssh_runner.go:195] Run: sudo runc list -f json
	W1207 23:11:59.763116  487084 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:11:59Z" level=error msg="open /run/runc: no such file or directory"
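	The unpause check fails here because the runc CLI has no state directory at /run/runc inside the kic container; the warning is non-fatal and the cluster restart proceeds from the existing configuration files. Listing the kube-system containers directly through CRI-O still works; a sketch mirroring the crictl call a few lines above:
	# List kube-system containers via CRI-O instead of runc
	minikube -p ha-907658 ssh -- \
	  "sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system"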
	I1207 23:11:59.763191  487084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:11:59.771453  487084 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:11:59.771471  487084 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:11:59.771524  487084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:11:59.778977  487084 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:11:59.779462  487084 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-907658" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:11:59.779590  487084 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-389542/kubeconfig needs updating (will repair): [kubeconfig missing "ha-907658" cluster setting kubeconfig missing "ha-907658" context setting]
	I1207 23:11:59.780044  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.780730  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 23:11:59.781268  487084 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1207 23:11:59.781286  487084 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1207 23:11:59.781293  487084 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1207 23:11:59.781300  487084 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1207 23:11:59.781318  487084 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1207 23:11:59.781314  487084 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1207 23:11:59.781841  487084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:11:59.790236  487084 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1207 23:11:59.790262  487084 kubeadm.go:602] duration metric: took 18.784379ms to restartPrimaryControlPlane
	I1207 23:11:59.790272  487084 kubeadm.go:403] duration metric: took 72.393488ms to StartCluster
	I1207 23:11:59.790292  487084 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.790408  487084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:11:59.791175  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.791433  487084 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:11:59.791463  487084 start.go:242] waiting for startup goroutines ...
	I1207 23:11:59.791480  487084 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:11:59.791743  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:59.794127  487084 out.go:179] * Enabled addons: 
	I1207 23:11:59.795136  487084 addons.go:530] duration metric: took 3.661252ms for enable addons: enabled=[]
	I1207 23:11:59.795167  487084 start.go:247] waiting for cluster config update ...
	I1207 23:11:59.795178  487084 start.go:256] writing updated cluster config ...
	I1207 23:11:59.796468  487084 out.go:203] 
	I1207 23:11:59.797620  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:59.797739  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:59.799011  487084 out.go:179] * Starting "ha-907658-m02" control-plane node in "ha-907658" cluster
	I1207 23:11:59.799852  487084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:11:59.800858  487084 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:11:59.801718  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:11:59.801733  487084 cache.go:65] Caching tarball of preloaded images
	I1207 23:11:59.801784  487084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:11:59.801821  487084 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:11:59.801834  487084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:11:59.801944  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:59.823527  487084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:11:59.823550  487084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:11:59.823570  487084 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:11:59.823603  487084 start.go:360] acquireMachinesLock for ha-907658-m02: {Name:mk6484dd4dfe7ba137d5f583543a1831d27edba5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:11:59.823673  487084 start.go:364] duration metric: took 49.067µs to acquireMachinesLock for "ha-907658-m02"
	I1207 23:11:59.823696  487084 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:11:59.823702  487084 fix.go:54] fixHost starting: m02
	I1207 23:11:59.823927  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m02 --format={{.State.Status}}
	I1207 23:11:59.844560  487084 fix.go:112] recreateIfNeeded on ha-907658-m02: state=Stopped err=<nil>
	W1207 23:11:59.844589  487084 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:11:59.846377  487084 out.go:252] * Restarting existing docker container for "ha-907658-m02" ...
	I1207 23:11:59.846453  487084 cli_runner.go:164] Run: docker start ha-907658-m02
	I1207 23:12:00.130224  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m02 --format={{.State.Status}}
	I1207 23:12:00.155491  487084 kic.go:430] container "ha-907658-m02" state is running.
	I1207 23:12:00.155911  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m02
	I1207 23:12:00.178281  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:00.178573  487084 machine.go:94] provisionDockerMachine start ...
	I1207 23:12:00.178649  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:00.198614  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:00.198945  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:00.198960  487084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:12:00.199661  487084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38884->127.0.0.1:33218: read: connection reset by peer
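The dial error above is expected right after "docker start": the forwarded SSH port is published before sshd inside the container is ready, so the first connection is reset and the provisioner simply retries until the port answers (the hostname command succeeds about three seconds later, as the next line shows). A minimal Go sketch of that wait-for-port pattern, illustrative only and not minikube's actual code; the address mirrors the forwarded port 33218 seen in this log:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort retries a plain TCP dial until the port accepts connections
// or the attempt budget is exhausted.
func waitForPort(addr string, attempts int, backoff time.Duration) error {
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(backoff)
	}
	return fmt.Errorf("port %s not reachable after %d attempts", addr, attempts)
}

func main() {
	// 127.0.0.1:33218 is the host port Docker forwards to the node's sshd in this log.
	if err := waitForPort("127.0.0.1:33218", 10, time.Second); err != nil {
		fmt.Println(err)
	}
}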
	I1207 23:12:03.333342  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m02
	
	I1207 23:12:03.333382  487084 ubuntu.go:182] provisioning hostname "ha-907658-m02"
	I1207 23:12:03.333446  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:03.352148  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:03.352463  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:03.352484  487084 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-907658-m02 && echo "ha-907658-m02" | sudo tee /etc/hostname
	I1207 23:12:03.505996  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m02
	
	I1207 23:12:03.506086  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:03.523096  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:03.523409  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:03.523430  487084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-907658-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-907658-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-907658-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:12:03.654538  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:12:03.654571  487084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:12:03.654593  487084 ubuntu.go:190] setting up certificates
	I1207 23:12:03.654607  487084 provision.go:84] configureAuth start
	I1207 23:12:03.654667  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m02
	I1207 23:12:03.678200  487084 provision.go:143] copyHostCerts
	I1207 23:12:03.678248  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:03.678285  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:12:03.678297  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:03.678397  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:12:03.678500  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:03.678535  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:12:03.678546  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:03.678587  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:12:03.678657  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:03.678682  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:12:03.678690  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:03.678715  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:12:03.678770  487084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.ha-907658-m02 san=[127.0.0.1 192.168.49.3 ha-907658-m02 localhost minikube]
	I1207 23:12:03.790264  487084 provision.go:177] copyRemoteCerts
	I1207 23:12:03.790352  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:12:03.790402  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:03.823101  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:03.924465  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 23:12:03.924539  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:12:03.944485  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 23:12:03.944556  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 23:12:03.968961  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 23:12:03.969036  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:12:03.995367  487084 provision.go:87] duration metric: took 340.743667ms to configureAuth
	I1207 23:12:03.995400  487084 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:12:03.995657  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:03.995779  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.026533  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:04.026857  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:04.026885  487084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:12:04.415911  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:12:04.415941  487084 machine.go:97] duration metric: took 4.237351611s to provisionDockerMachine
	I1207 23:12:04.415957  487084 start.go:293] postStartSetup for "ha-907658-m02" (driver="docker")
	I1207 23:12:04.415971  487084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:12:04.416028  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:12:04.416078  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.434685  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.530207  487084 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:12:04.533967  487084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:12:04.533999  487084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:12:04.534014  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:12:04.534066  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:12:04.534139  487084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:12:04.534149  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /etc/ssl/certs/3931252.pem
	I1207 23:12:04.534230  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:12:04.542117  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:04.560472  487084 start.go:296] duration metric: took 144.495639ms for postStartSetup
	I1207 23:12:04.560570  487084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:12:04.560625  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.577649  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.669363  487084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:12:04.674346  487084 fix.go:56] duration metric: took 4.85062394s for fixHost
	I1207 23:12:04.674372  487084 start.go:83] releasing machines lock for "ha-907658-m02", held for 4.850686194s
	I1207 23:12:04.674436  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m02
	I1207 23:12:04.693901  487084 out.go:179] * Found network options:
	I1207 23:12:04.695122  487084 out.go:179]   - NO_PROXY=192.168.49.2
	W1207 23:12:04.696299  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:04.696348  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	I1207 23:12:04.696432  487084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:12:04.696482  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.696491  487084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:12:04.696545  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.715832  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.716229  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.880414  487084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:12:04.885363  487084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:12:04.885437  487084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:12:04.893312  487084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:12:04.893347  487084 start.go:496] detecting cgroup driver to use...
	I1207 23:12:04.893386  487084 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:12:04.893433  487084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:12:04.908112  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:12:04.920708  487084 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:12:04.920806  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:12:04.935538  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:12:04.948970  487084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:12:05.093803  487084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:12:05.237498  487084 docker.go:234] disabling docker service ...
	I1207 23:12:05.237578  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:12:05.255362  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:12:05.271477  487084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:12:05.401811  487084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:12:05.532521  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:12:05.547785  487084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:12:05.566033  487084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:12:05.566094  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.577067  487084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:12:05.577126  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.589050  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.599566  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.609984  487084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:12:05.619430  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.632001  487084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.642199  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.652617  487084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:12:05.661297  487084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:12:05.671605  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:05.817088  487084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:12:06.027922  487084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:12:06.027991  487084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:12:06.032083  487084 start.go:564] Will wait 60s for crictl version
	I1207 23:12:06.032144  487084 ssh_runner.go:195] Run: which crictl
	I1207 23:12:06.035913  487084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:12:06.060174  487084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
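The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the systemd cgroup manager, a "pod" conmon cgroup, and the unprivileged-port sysctl, after which crio is restarted and probed with crictl. A hedged Go sketch of the same kind of in-place line rewrite (illustrative; minikube itself drives sed over SSH as shown, and the pause_image edit is just one of the edits above):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage replaces the pause_image line of a CRI-O drop-in config,
// mirroring: sed -i 's|^.*pause_image = .*$|pause_image = "<image>"|' <path>
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Path and image taken from the log; error handling kept minimal for brevity.
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}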
	I1207 23:12:06.060268  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:06.088918  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:06.119010  487084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:12:06.120321  487084 out.go:179]   - env NO_PROXY=192.168.49.2
	I1207 23:12:06.121801  487084 cli_runner.go:164] Run: docker network inspect ha-907658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:12:06.139719  487084 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:12:06.143993  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:06.155217  487084 mustload.go:66] Loading cluster: ha-907658
	I1207 23:12:06.155433  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:06.155653  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:12:06.173920  487084 host.go:66] Checking if "ha-907658" exists ...
	I1207 23:12:06.174154  487084 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658 for IP: 192.168.49.3
	I1207 23:12:06.174165  487084 certs.go:195] generating shared ca certs ...
	I1207 23:12:06.174179  487084 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:12:06.174311  487084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:12:06.174381  487084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:12:06.174397  487084 certs.go:257] generating profile certs ...
	I1207 23:12:06.174493  487084 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key
	I1207 23:12:06.174583  487084 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.39a0badd
	I1207 23:12:06.174639  487084 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key
	I1207 23:12:06.174654  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 23:12:06.174671  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 23:12:06.174693  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 23:12:06.174708  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 23:12:06.174722  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 23:12:06.174739  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 23:12:06.174753  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 23:12:06.174772  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 23:12:06.174836  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:12:06.174877  487084 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:12:06.174891  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:12:06.174926  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:12:06.174963  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:12:06.174996  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:12:06.175052  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:06.175095  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.175115  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.175131  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem -> /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.175194  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:12:06.197420  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:12:06.283673  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1207 23:12:06.290449  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1207 23:12:06.302775  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1207 23:12:06.308469  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1207 23:12:06.317835  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1207 23:12:06.321609  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1207 23:12:06.330066  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1207 23:12:06.333816  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1207 23:12:06.345628  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1207 23:12:06.352380  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1207 23:12:06.360869  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1207 23:12:06.364787  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1207 23:12:06.374104  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:12:06.394705  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:12:06.413194  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:12:06.432115  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:12:06.449406  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1207 23:12:06.466917  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:12:06.498654  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:12:06.528737  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:12:06.546449  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:12:06.564005  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:12:06.582815  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:12:06.601666  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1207 23:12:06.615105  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1207 23:12:06.631379  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1207 23:12:06.646798  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1207 23:12:06.659864  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1207 23:12:06.675256  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1207 23:12:06.690795  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1207 23:12:06.705444  487084 ssh_runner.go:195] Run: openssl version
	I1207 23:12:06.712063  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.720029  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:12:06.728834  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.733304  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.733391  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.771128  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:12:06.779038  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.787058  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:12:06.794858  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.798600  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.798662  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.834714  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:12:06.842519  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.849816  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:12:06.857109  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.860827  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.860876  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.901264  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:12:06.909596  487084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:12:06.913535  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:12:06.953706  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:12:06.990023  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:12:07.024365  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:12:07.059478  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:12:07.093656  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
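Each "openssl x509 -noout -checkend 86400" call above checks whether the corresponding control-plane certificate will still be valid 24 hours from now; only certificates that pass are left in place rather than regenerated. A rough Go equivalent of that check, illustrative only, with the path being just one of the certificates probed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question openssl's -checkend answers (exit 1 when it does).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}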
	I1207 23:12:07.130433  487084 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1207 23:12:07.130566  487084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-907658-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:12:07.130596  487084 kube-vip.go:115] generating kube-vip config ...
	I1207 23:12:07.130647  487084 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1207 23:12:07.142960  487084 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:12:07.143037  487084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1207 23:12:07.143109  487084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:12:07.151538  487084 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:12:07.151608  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1207 23:12:07.159652  487084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1207 23:12:07.172062  487084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:12:07.184591  487084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1207 23:12:07.197988  487084 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1207 23:12:07.201949  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:07.212295  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:07.335873  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:07.349280  487084 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:12:07.349636  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:07.351992  487084 out.go:179] * Verifying Kubernetes components...
	I1207 23:12:07.353164  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:07.482271  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:07.495426  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1207 23:12:07.495497  487084 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1207 23:12:07.495703  487084 node_ready.go:35] waiting up to 6m0s for node "ha-907658-m02" to be "Ready" ...
	I1207 23:12:07.504809  487084 node_ready.go:49] node "ha-907658-m02" is "Ready"
	I1207 23:12:07.504835  487084 node_ready.go:38] duration metric: took 9.118175ms for node "ha-907658-m02" to be "Ready" ...
	I1207 23:12:07.504849  487084 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:12:07.504891  487084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:12:07.517382  487084 api_server.go:72] duration metric: took 168.030727ms to wait for apiserver process to appear ...
	I1207 23:12:07.517409  487084 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:12:07.517436  487084 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1207 23:12:07.523117  487084 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1207 23:12:07.524187  487084 api_server.go:141] control plane version: v1.34.2
	I1207 23:12:07.524214  487084 api_server.go:131] duration metric: took 6.79771ms to wait for apiserver health ...
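The health gate above is a plain HTTPS GET against the apiserver's /healthz endpoint, which returns 200 with the body "ok" once the control plane is serving. A minimal sketch of that probe, with the assumptions stated in the comments (TLS verification is skipped here only to keep the sketch self-contained, whereas the client in this log uses the cluster CA and client certificates; depending on apiserver flags, anonymous /healthz access may also be restricted):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification purely for brevity; do not do this in real checks.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// Endpoint taken from the log line above.
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz error:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
}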
	I1207 23:12:07.524224  487084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:12:07.530960  487084 system_pods.go:59] 26 kube-system pods found
	I1207 23:12:07.531007  487084 system_pods.go:61] "coredns-66bc5c9577-7lkd8" [87d8dbef-c05d-4fcd-b08e-4ee6bce689ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.531030  487084 system_pods.go:61] "coredns-66bc5c9577-j9lqh" [50fb7869-af19-4fe4-a49d-bf8431faa47e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.531045  487084 system_pods.go:61] "etcd-ha-907658" [a1045f46-63e5-4adf-8cba-698626661685] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.531055  487084 system_pods.go:61] "etcd-ha-907658-m02" [e0fd4196-c559-4ed5-a866-f2edca5d028b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.531065  487084 system_pods.go:61] "etcd-ha-907658-m03" [ec660b37-46e0-4ea6-8143-43a215cb208e] Running
	I1207 23:12:07.531077  487084 system_pods.go:61] "kindnet-5lg58" [595946fb-4b57-4869-85e2-75debf3486ae] Running
	I1207 23:12:07.531082  487084 system_pods.go:61] "kindnet-9rqhs" [78003a20-15f9-43e0-8a11-9c215ade326b] Running
	I1207 23:12:07.531086  487084 system_pods.go:61] "kindnet-hzfvq" [8c0ef1d7-39de-46ce-b16b-4d2794e7dc20] Running
	I1207 23:12:07.531090  487084 system_pods.go:61] "kindnet-wvnmz" [464814b4-64d5-4cae-b298-44186fe9b844] Running
	I1207 23:12:07.531102  487084 system_pods.go:61] "kube-apiserver-ha-907658" [746157f2-b5d4-4a22-b0d0-e186dba5c022] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.531114  487084 system_pods.go:61] "kube-apiserver-ha-907658-m02" [69e1f1f9-cc80-4383-8bf2-cd362ab2fc9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.531122  487084 system_pods.go:61] "kube-apiserver-ha-907658-m03" [6dd58630-2169-4539-b8eb-d9971aef28c0] Running
	I1207 23:12:07.531128  487084 system_pods.go:61] "kube-controller-manager-ha-907658" [86717111-1edd-4e7d-bd64-87a0b751fd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.531132  487084 system_pods.go:61] "kube-controller-manager-ha-907658-m02" [2edf59bb-e62d-4897-9d2f-6a454cc72644] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.531138  487084 system_pods.go:61] "kube-controller-manager-ha-907658-m03" [87b33e73-dedd-477d-87fa-42e198df84ba] Running
	I1207 23:12:07.531141  487084 system_pods.go:61] "kube-proxy-8fwsf" [1d7267ee-074b-40da-bfe0-4b434d732d8c] Running
	I1207 23:12:07.531147  487084 system_pods.go:61] "kube-proxy-b8vz9" [cd4b68a6-4528-4644-bac6-158d1bffd0ed] Running
	I1207 23:12:07.531150  487084 system_pods.go:61] "kube-proxy-r5c77" [c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9] Running
	I1207 23:12:07.531153  487084 system_pods.go:61] "kube-proxy-sdhd8" [55e62bf1-af57-4c34-925a-c44c47ce32ce] Running
	I1207 23:12:07.531157  487084 system_pods.go:61] "kube-scheduler-ha-907658" [16a4e936-d293-4107-b559-200f764f7dd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.531164  487084 system_pods.go:61] "kube-scheduler-ha-907658-m02" [85e3e5a5-fe1f-4994-90d4-c4e42a5a887f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.531175  487084 system_pods.go:61] "kube-scheduler-ha-907658-m03" [ca765146-fd0b-4cc8-9f6e-55e2601a5033] Running
	I1207 23:12:07.531178  487084 system_pods.go:61] "kube-vip-ha-907658" [2fc8fc0b-3f23-44d1-909a-20f06169c8dd] Running
	I1207 23:12:07.531181  487084 system_pods.go:61] "kube-vip-ha-907658-m02" [53a8762d-c686-486f-9814-2f40e4ff3306] Running
	I1207 23:12:07.531184  487084 system_pods.go:61] "kube-vip-ha-907658-m03" [6bc4a730-7a65-43a8-a746-2bc3ffa9ccc8] Running
	I1207 23:12:07.531186  487084 system_pods.go:61] "storage-provisioner" [5e80f8de-afe9-4c94-997c-c06f5ff985db] Running
	I1207 23:12:07.531192  487084 system_pods.go:74] duration metric: took 6.96154ms to wait for pod list to return data ...
	I1207 23:12:07.531202  487084 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:12:07.533477  487084 default_sa.go:45] found service account: "default"
	I1207 23:12:07.533501  487084 default_sa.go:55] duration metric: took 2.292892ms for default service account to be created ...
	I1207 23:12:07.533508  487084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:12:07.539025  487084 system_pods.go:86] 26 kube-system pods found
	I1207 23:12:07.539051  487084 system_pods.go:89] "coredns-66bc5c9577-7lkd8" [87d8dbef-c05d-4fcd-b08e-4ee6bce689ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.539059  487084 system_pods.go:89] "coredns-66bc5c9577-j9lqh" [50fb7869-af19-4fe4-a49d-bf8431faa47e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.539067  487084 system_pods.go:89] "etcd-ha-907658" [a1045f46-63e5-4adf-8cba-698626661685] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.539072  487084 system_pods.go:89] "etcd-ha-907658-m02" [e0fd4196-c559-4ed5-a866-f2edca5d028b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.539076  487084 system_pods.go:89] "etcd-ha-907658-m03" [ec660b37-46e0-4ea6-8143-43a215cb208e] Running
	I1207 23:12:07.539080  487084 system_pods.go:89] "kindnet-5lg58" [595946fb-4b57-4869-85e2-75debf3486ae] Running
	I1207 23:12:07.539083  487084 system_pods.go:89] "kindnet-9rqhs" [78003a20-15f9-43e0-8a11-9c215ade326b] Running
	I1207 23:12:07.539087  487084 system_pods.go:89] "kindnet-hzfvq" [8c0ef1d7-39de-46ce-b16b-4d2794e7dc20] Running
	I1207 23:12:07.539090  487084 system_pods.go:89] "kindnet-wvnmz" [464814b4-64d5-4cae-b298-44186fe9b844] Running
	I1207 23:12:07.539097  487084 system_pods.go:89] "kube-apiserver-ha-907658" [746157f2-b5d4-4a22-b0d0-e186dba5c022] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.539105  487084 system_pods.go:89] "kube-apiserver-ha-907658-m02" [69e1f1f9-cc80-4383-8bf2-cd362ab2fc9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.539109  487084 system_pods.go:89] "kube-apiserver-ha-907658-m03" [6dd58630-2169-4539-b8eb-d9971aef28c0] Running
	I1207 23:12:07.539118  487084 system_pods.go:89] "kube-controller-manager-ha-907658" [86717111-1edd-4e7d-bd64-87a0b751fd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.539123  487084 system_pods.go:89] "kube-controller-manager-ha-907658-m02" [2edf59bb-e62d-4897-9d2f-6a454cc72644] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.539127  487084 system_pods.go:89] "kube-controller-manager-ha-907658-m03" [87b33e73-dedd-477d-87fa-42e198df84ba] Running
	I1207 23:12:07.539130  487084 system_pods.go:89] "kube-proxy-8fwsf" [1d7267ee-074b-40da-bfe0-4b434d732d8c] Running
	I1207 23:12:07.539139  487084 system_pods.go:89] "kube-proxy-b8vz9" [cd4b68a6-4528-4644-bac6-158d1bffd0ed] Running
	I1207 23:12:07.539144  487084 system_pods.go:89] "kube-proxy-r5c77" [c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9] Running
	I1207 23:12:07.539153  487084 system_pods.go:89] "kube-proxy-sdhd8" [55e62bf1-af57-4c34-925a-c44c47ce32ce] Running
	I1207 23:12:07.539159  487084 system_pods.go:89] "kube-scheduler-ha-907658" [16a4e936-d293-4107-b559-200f764f7dd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.539164  487084 system_pods.go:89] "kube-scheduler-ha-907658-m02" [85e3e5a5-fe1f-4994-90d4-c4e42a5a887f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.539167  487084 system_pods.go:89] "kube-scheduler-ha-907658-m03" [ca765146-fd0b-4cc8-9f6e-55e2601a5033] Running
	I1207 23:12:07.539171  487084 system_pods.go:89] "kube-vip-ha-907658" [2fc8fc0b-3f23-44d1-909a-20f06169c8dd] Running
	I1207 23:12:07.539174  487084 system_pods.go:89] "kube-vip-ha-907658-m02" [53a8762d-c686-486f-9814-2f40e4ff3306] Running
	I1207 23:12:07.539176  487084 system_pods.go:89] "kube-vip-ha-907658-m03" [6bc4a730-7a65-43a8-a746-2bc3ffa9ccc8] Running
	I1207 23:12:07.539181  487084 system_pods.go:89] "storage-provisioner" [5e80f8de-afe9-4c94-997c-c06f5ff985db] Running
	I1207 23:12:07.539191  487084 system_pods.go:126] duration metric: took 5.677775ms to wait for k8s-apps to be running ...
	I1207 23:12:07.539200  487084 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:12:07.539244  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:12:07.552415  487084 system_svc.go:56] duration metric: took 13.204195ms WaitForService to wait for kubelet
	I1207 23:12:07.552445  487084 kubeadm.go:587] duration metric: took 203.099861ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:12:07.552461  487084 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:12:07.556717  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:07.556763  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:07.556789  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:07.556794  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:07.556800  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:07.556804  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:07.556815  487084 node_conditions.go:105] duration metric: took 4.343663ms to run NodePressure ...
	I1207 23:12:07.556830  487084 start.go:242] waiting for startup goroutines ...
	I1207 23:12:07.556864  487084 start.go:256] writing updated cluster config ...
	I1207 23:12:07.559024  487084 out.go:203] 
	I1207 23:12:07.560420  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:07.560527  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:07.562073  487084 out.go:179] * Starting "ha-907658-m04" worker node in "ha-907658" cluster
	I1207 23:12:07.563315  487084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:12:07.564547  487084 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:12:07.565586  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:12:07.565600  487084 cache.go:65] Caching tarball of preloaded images
	I1207 23:12:07.565653  487084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:12:07.565684  487084 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:12:07.565695  487084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:12:07.565787  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:07.585455  487084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:12:07.585473  487084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:12:07.585488  487084 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:12:07.585525  487084 start.go:360] acquireMachinesLock for ha-907658-m04: {Name:mkbf928fa5c7c7d65c3e97ec1b1d2c403a4aafbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:12:07.585593  487084 start.go:364] duration metric: took 46.24µs to acquireMachinesLock for "ha-907658-m04"
	I1207 23:12:07.585618  487084 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:12:07.585630  487084 fix.go:54] fixHost starting: m04
	I1207 23:12:07.585905  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m04 --format={{.State.Status}}
	I1207 23:12:07.603987  487084 fix.go:112] recreateIfNeeded on ha-907658-m04: state=Stopped err=<nil>
	W1207 23:12:07.604014  487084 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:12:07.605765  487084 out.go:252] * Restarting existing docker container for "ha-907658-m04" ...
	I1207 23:12:07.605839  487084 cli_runner.go:164] Run: docker start ha-907658-m04
	I1207 23:12:07.853178  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m04 --format={{.State.Status}}
	I1207 23:12:07.874755  487084 kic.go:430] container "ha-907658-m04" state is running.
	I1207 23:12:07.875212  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:12:07.896653  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:07.897024  487084 machine.go:94] provisionDockerMachine start ...
	I1207 23:12:07.897151  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:07.918923  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:07.919195  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:07.919216  487084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:12:07.919824  487084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49894->127.0.0.1:33223: read: connection reset by peer
	I1207 23:12:11.048469  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m04
	
	I1207 23:12:11.048499  487084 ubuntu.go:182] provisioning hostname "ha-907658-m04"
	I1207 23:12:11.048563  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.066447  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:11.066738  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:11.066753  487084 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-907658-m04 && echo "ha-907658-m04" | sudo tee /etc/hostname
	I1207 23:12:11.206276  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m04
	
	I1207 23:12:11.206388  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.225667  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:11.225909  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:11.225925  487084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-907658-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-907658-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-907658-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:12:11.355703  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:12:11.355747  487084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:12:11.355789  487084 ubuntu.go:190] setting up certificates
	I1207 23:12:11.355803  487084 provision.go:84] configureAuth start
	I1207 23:12:11.355885  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:12:11.374837  487084 provision.go:143] copyHostCerts
	I1207 23:12:11.374879  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:11.374918  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:12:11.374932  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:11.375021  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:12:11.375125  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:11.375155  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:12:11.375165  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:11.375205  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:12:11.375256  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:11.375278  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:12:11.375284  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:11.375321  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:12:11.375435  487084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.ha-907658-m04 san=[127.0.0.1 192.168.49.5 ha-907658-m04 localhost minikube]
	I1207 23:12:11.430934  487084 provision.go:177] copyRemoteCerts
	I1207 23:12:11.431006  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:12:11.431063  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.449187  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:11.543515  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 23:12:11.543582  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 23:12:11.562188  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 23:12:11.562249  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 23:12:11.579970  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 23:12:11.580024  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:12:11.597607  487084 provision.go:87] duration metric: took 241.785948ms to configureAuth
	I1207 23:12:11.597642  487084 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:12:11.597863  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:11.597964  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.616041  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:11.616267  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:11.616282  487084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:12:11.900554  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:12:11.900587  487084 machine.go:97] duration metric: took 4.00354246s to provisionDockerMachine
	I1207 23:12:11.900600  487084 start.go:293] postStartSetup for "ha-907658-m04" (driver="docker")
	I1207 23:12:11.900611  487084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:12:11.900667  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:12:11.900705  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.919920  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.015993  487084 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:12:12.019664  487084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:12:12.019701  487084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:12:12.019713  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:12:12.019773  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:12:12.019880  487084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:12:12.019892  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /etc/ssl/certs/3931252.pem
	I1207 23:12:12.020003  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:12:12.028252  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:12.045963  487084 start.go:296] duration metric: took 145.345162ms for postStartSetup
	I1207 23:12:12.046054  487084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:12:12.046100  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:12.064419  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.155615  487084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:12:12.160279  487084 fix.go:56] duration metric: took 4.57464273s for fixHost
	I1207 23:12:12.160305  487084 start.go:83] releasing machines lock for "ha-907658-m04", held for 4.574698172s
	I1207 23:12:12.160388  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:12:12.180857  487084 out.go:179] * Found network options:
	I1207 23:12:12.182145  487084 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1207 23:12:12.183173  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:12.183195  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:12.183220  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:12.183237  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	I1207 23:12:12.183304  487084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:12:12.183368  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:12.183387  487084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:12:12.183450  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:12.203407  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.203844  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.357625  487084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:12:12.362541  487084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:12:12.362619  487084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:12:12.370757  487084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:12:12.370785  487084 start.go:496] detecting cgroup driver to use...
	I1207 23:12:12.370818  487084 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:12:12.370864  487084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:12:12.385478  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:12:12.398446  487084 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:12:12.398518  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:12:12.413312  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:12:12.425964  487084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:12:12.508240  487084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:12:12.594377  487084 docker.go:234] disabling docker service ...
	I1207 23:12:12.594469  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:12:12.609287  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:12:12.621518  487084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:12:12.706445  487084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:12:12.788828  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:12:12.801567  487084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:12:12.815799  487084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:12:12.815866  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.824631  487084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:12:12.824701  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.834415  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.843435  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.852233  487084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:12:12.861003  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.870357  487084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.879159  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.888283  487084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:12:12.896022  487084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:12:12.903097  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:12.988157  487084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:12:13.133593  487084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:12:13.133671  487084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:12:13.137843  487084 start.go:564] Will wait 60s for crictl version
	I1207 23:12:13.137917  487084 ssh_runner.go:195] Run: which crictl
	I1207 23:12:13.141433  487084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:12:13.167512  487084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:12:13.167597  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:13.199036  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:13.229455  487084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:12:13.230791  487084 out.go:179]   - env NO_PROXY=192.168.49.2
	I1207 23:12:13.232057  487084 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1207 23:12:13.233540  487084 cli_runner.go:164] Run: docker network inspect ha-907658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:12:13.250726  487084 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:12:13.254740  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:13.265197  487084 mustload.go:66] Loading cluster: ha-907658
	I1207 23:12:13.265455  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:13.265697  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:12:13.284748  487084 host.go:66] Checking if "ha-907658" exists ...
	I1207 23:12:13.285028  487084 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658 for IP: 192.168.49.5
	I1207 23:12:13.285041  487084 certs.go:195] generating shared ca certs ...
	I1207 23:12:13.285056  487084 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:12:13.285200  487084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:12:13.285261  487084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:12:13.285280  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 23:12:13.285300  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 23:12:13.285317  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 23:12:13.285349  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 23:12:13.285417  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:12:13.285460  487084 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:12:13.285474  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:12:13.285512  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:12:13.285554  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:12:13.285592  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:12:13.285658  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:13.285698  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.285722  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem -> /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.285741  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.285769  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:12:13.304120  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:12:13.322222  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:12:13.340050  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:12:13.357784  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:12:13.376383  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:12:13.395635  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:12:13.413473  487084 ssh_runner.go:195] Run: openssl version
	I1207 23:12:13.419754  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.427021  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:12:13.434993  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.439202  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.439267  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.473339  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:12:13.481399  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.488584  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:12:13.495734  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.499349  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.499394  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.534119  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:12:13.542358  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.550110  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:12:13.557923  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.561771  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.561821  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.600731  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:12:13.608915  487084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:12:13.612836  487084 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:12:13.612892  487084 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2  false true} ...
	I1207 23:12:13.613000  487084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-907658-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:12:13.613071  487084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:12:13.620905  487084 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:12:13.620964  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1207 23:12:13.628840  487084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1207 23:12:13.642519  487084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:12:13.655821  487084 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1207 23:12:13.660403  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:13.672258  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:13.756400  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:13.769720  487084 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1207 23:12:13.770008  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:13.772651  487084 out.go:179] * Verifying Kubernetes components...
	I1207 23:12:13.773857  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:13.857293  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:13.870886  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1207 23:12:13.870958  487084 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1207 23:12:13.871160  487084 node_ready.go:35] waiting up to 6m0s for node "ha-907658-m04" to be "Ready" ...
	I1207 23:12:13.874196  487084 node_ready.go:49] node "ha-907658-m04" is "Ready"
	I1207 23:12:13.874220  487084 node_ready.go:38] duration metric: took 3.046821ms for node "ha-907658-m04" to be "Ready" ...
	I1207 23:12:13.874233  487084 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:12:13.874273  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:12:13.886840  487084 system_svc.go:56] duration metric: took 12.598168ms WaitForService to wait for kubelet
	I1207 23:12:13.886868  487084 kubeadm.go:587] duration metric: took 117.090427ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:12:13.886885  487084 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:12:13.890337  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:13.890362  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:13.890375  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:13.890380  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:13.890386  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:13.890392  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:13.890400  487084 node_conditions.go:105] duration metric: took 3.509832ms to run NodePressure ...
	I1207 23:12:13.890416  487084 start.go:242] waiting for startup goroutines ...
	I1207 23:12:13.890446  487084 start.go:256] writing updated cluster config ...
	I1207 23:12:13.890792  487084 ssh_runner.go:195] Run: rm -f paused
	I1207 23:12:13.894562  487084 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:12:13.895171  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 23:12:13.903646  487084 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7lkd8" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:12:15.910233  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:17.910533  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:20.410624  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:22.909833  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:25.410696  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:27.909729  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:29.911016  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:32.410597  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:34.410833  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:36.909456  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:38.911942  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:41.410807  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:43.910363  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:46.411526  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:48.911050  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:51.412217  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:53.910759  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:56.410211  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:58.410607  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:00.411373  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:02.910918  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:05.409687  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:07.409957  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:09.910681  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:12.410492  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:14.410764  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:16.909949  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:18.910470  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:20.911090  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:23.410279  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:25.910548  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:27.910666  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:30.410084  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:32.410161  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:34.411051  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:36.910027  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:39.410570  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:41.909517  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:43.910651  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:46.409768  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:48.410760  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:50.910511  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:52.910970  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:55.410193  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:57.410684  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:59.911085  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:01.911298  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:04.410828  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:06.910004  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:08.910803  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:11.410260  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:13.410549  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:15.911180  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:18.410236  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:20.910248  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:23.410312  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:25.909481  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:27.910308  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:29.910475  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:32.410112  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:34.910739  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:37.410174  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:39.410772  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:41.910812  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:44.409997  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:46.410369  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:48.910126  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:50.910698  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:53.410089  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:55.410604  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:57.910049  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:59.910503  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:02.409755  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:04.909540  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:06.910504  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:09.409997  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:11.411142  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:13.910274  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:16.410995  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:18.909895  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:20.909974  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:22.910657  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:25.410074  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:27.410196  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:29.410456  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:31.910828  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:34.410231  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:36.410432  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:38.909644  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:40.910092  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:42.910856  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:45.409802  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:47.410082  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:49.410149  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:51.910490  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:54.409927  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:56.410532  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:58.909671  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:00.910288  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:02.910545  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:05.410175  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:07.909887  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:09.910041  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:11.910457  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	I1207 23:16:13.895206  487084 pod_ready.go:86] duration metric: took 3m59.991503796s for pod "coredns-66bc5c9577-7lkd8" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:16:13.895245  487084 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1207 23:16:13.895263  487084 pod_ready.go:40] duration metric: took 4m0.000670566s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:16:13.897256  487084 out.go:203] 
	W1207 23:16:13.898559  487084 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1207 23:16:13.899846  487084 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-907658 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
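The stderr log above ends with minikube's pod_ready helper re-checking the coredns pod roughly every 2.5 seconds until the 4m0s WaitExtra budget runs out, at which point the start aborts with GUEST_START (exit status 80). A minimal Go sketch of that kind of readiness poll is included below for reference; the pod name, namespace, cadence, and timeout are taken from the log, while the kubeconfig handling and helper names are assumptions for illustration, not minikube's actual implementation.

// Minimal sketch, not minikube's code: poll a pod's Ready condition the way
// the pod_ready helper in the log does. Pod name, namespace, and the ~2.5s
// cadence come from the log above; everything else is an assumption.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2.5s for up to 4 minutes, mirroring the retry cadence and
	// the 4m0s WaitExtra budget visible in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-7lkd8", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep retrying until the deadline
			}
			return podReady(pod), nil
		})
	if err != nil {
		// This is the situation the test hit: the deadline expired first.
		fmt.Println("pod never became Ready:", err)
	}
}

Returning false with a nil error on a failed check keeps the poll retrying until the context deadline, which matches the long run of "is not \"Ready\", error: <nil>" retries seen above.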
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-907658
helpers_test.go:243: (dbg) docker inspect ha-907658:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf",
	        "Created": "2025-12-07T23:06:25.641182516Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 487285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:11:52.946976582Z",
	            "FinishedAt": "2025-12-07T23:11:52.180976562Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf/hostname",
	        "HostsPath": "/var/lib/docker/containers/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf/hosts",
	        "LogPath": "/var/lib/docker/containers/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf-json.log",
	        "Name": "/ha-907658",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-907658:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-907658",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf",
	                "LowerDir": "/var/lib/docker/overlay2/95f4d37acd9603eb9082e08eb2b25d1d911e5a215fb4e71b00c8c77b90dafbc3-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/95f4d37acd9603eb9082e08eb2b25d1d911e5a215fb4e71b00c8c77b90dafbc3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/95f4d37acd9603eb9082e08eb2b25d1d911e5a215fb4e71b00c8c77b90dafbc3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/95f4d37acd9603eb9082e08eb2b25d1d911e5a215fb4e71b00c8c77b90dafbc3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-907658",
	                "Source": "/var/lib/docker/volumes/ha-907658/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-907658",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-907658",
	                "name.minikube.sigs.k8s.io": "ha-907658",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ba5e035333284e7ec191aa45f8e8f710a1211614ee9390e57a685e532fd2b7d0",
	            "SandboxKey": "/var/run/docker/netns/ba5e03533328",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33213"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33214"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33217"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33215"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33216"
	                    }
	                ]
	            },
	            "Networks": {
	                "ha-907658": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "918c8f4f6e86f6f20607e87a6beb39a8a1d64cc9183e3317d1968551e79c40e2",
	                    "EndpointID": "39156e34f46c5c2dd2e2dd90a72a9e93d4aca46c4dae46d6dd8bcd5fd820e723",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d2:5b:58:4b:cd:fa",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-907658",
	                        "b18b557fea95"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
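For reference, the SSH host ports shown in this inspect output (33213 for ha-907658) and in the stderr log (33223 for ha-907658-m04) are resolved by the repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls. The sketch below reproduces that lookup; only the Go template and the container names come from the report, while the helper function and its error handling are assumptions.

// Hypothetical helper mirroring the repeated port lookups in the stderr log:
// ask the Docker CLI which host port is published for a given container port.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor runs `docker container inspect` with the same Go template the
// log shows, walking NetworkSettings.Ports[containerPort][0].HostPort.
func hostPortFor(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// For the control-plane container inspected above this would print 33213;
	// the m04 worker in the stderr log resolved to 33223.
	port, err := hostPortFor("ha-907658", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port)
}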
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-907658 -n ha-907658
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 logs -n 25: (1.022204011s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-907658 cp ha-907658-m03:/home/docker/cp-test.txt ha-907658-m04:/home/docker/cp-test_ha-907658-m03_ha-907658-m04.txt               │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test_ha-907658-m03_ha-907658-m04.txt                                         │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ cp      │ ha-907658 cp testdata/cp-test.txt ha-907658-m04:/home/docker/cp-test.txt                                                             │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ cp      │ ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786965912/001/cp-test_ha-907658-m04.txt │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ cp      │ ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt ha-907658:/home/docker/cp-test_ha-907658-m04_ha-907658.txt                       │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658 sudo cat /home/docker/cp-test_ha-907658-m04_ha-907658.txt                                                 │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ cp      │ ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt ha-907658-m02:/home/docker/cp-test_ha-907658-m04_ha-907658-m02.txt               │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m02 sudo cat /home/docker/cp-test_ha-907658-m04_ha-907658-m02.txt                                         │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ cp      │ ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt ha-907658-m03:/home/docker/cp-test_ha-907658-m04_ha-907658-m03.txt               │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m03 sudo cat /home/docker/cp-test_ha-907658-m04_ha-907658-m03.txt                                         │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ node    │ ha-907658 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ node    │ ha-907658 node start m02 --alsologtostderr -v 5                                                                                      │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ node    │ ha-907658 node list --alsologtostderr -v 5                                                                                           │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │                     │
	│ stop    │ ha-907658 stop --alsologtostderr -v 5                                                                                                │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:10 UTC │
	│ start   │ ha-907658 start --wait true --alsologtostderr -v 5                                                                                   │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:11 UTC │
	│ node    │ ha-907658 node list --alsologtostderr -v 5                                                                                           │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:11 UTC │                     │
	│ node    │ ha-907658 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:11 UTC │ 07 Dec 25 23:11 UTC │
	│ stop    │ ha-907658 stop --alsologtostderr -v 5                                                                                                │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:11 UTC │ 07 Dec 25 23:11 UTC │
	│ start   │ ha-907658 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:11:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:11:52.723208  487084 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:11:52.723342  487084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:11:52.723354  487084 out.go:374] Setting ErrFile to fd 2...
	I1207 23:11:52.723361  487084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:11:52.723559  487084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:11:52.724064  487084 out.go:368] Setting JSON to false
	I1207 23:11:52.725035  487084 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6857,"bootTime":1765142256,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:11:52.725102  487084 start.go:143] virtualization: kvm guest
	I1207 23:11:52.726965  487084 out.go:179] * [ha-907658] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:11:52.728170  487084 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:11:52.728167  487084 notify.go:221] Checking for updates...
	I1207 23:11:52.730209  487084 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:11:52.731286  487084 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:11:52.732435  487084 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:11:52.733509  487084 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:11:52.734621  487084 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:11:52.736265  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:52.736931  487084 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:11:52.761948  487084 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:11:52.762088  487084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:11:52.815796  487084 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-07 23:11:52.805859782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:11:52.815895  487084 docker.go:319] overlay module found
	I1207 23:11:52.818644  487084 out.go:179] * Using the docker driver based on existing profile
	I1207 23:11:52.819812  487084 start.go:309] selected driver: docker
	I1207 23:11:52.819828  487084 start.go:927] validating driver "docker" against &{Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:11:52.819961  487084 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:11:52.820059  487084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:11:52.873900  487084 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-07 23:11:52.864641727 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:11:52.874579  487084 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:11:52.874614  487084 cni.go:84] Creating CNI manager for ""
	I1207 23:11:52.874670  487084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1207 23:11:52.874722  487084 start.go:353] cluster config:
	{Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:11:52.876967  487084 out.go:179] * Starting "ha-907658" primary control-plane node in "ha-907658" cluster
	I1207 23:11:52.877923  487084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:11:52.878975  487084 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:11:52.880201  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:11:52.880231  487084 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 23:11:52.880239  487084 cache.go:65] Caching tarball of preloaded images
	I1207 23:11:52.880300  487084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:11:52.880362  487084 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:11:52.880377  487084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:11:52.880537  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:52.900771  487084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:11:52.900792  487084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:11:52.900810  487084 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:11:52.900849  487084 start.go:360] acquireMachinesLock for ha-907658: {Name:mkd7016770bc40ef9cd544023d232b92bc7cf832 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:11:52.900927  487084 start.go:364] duration metric: took 42.672µs to acquireMachinesLock for "ha-907658"
	I1207 23:11:52.900952  487084 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:11:52.900961  487084 fix.go:54] fixHost starting: 
	I1207 23:11:52.901168  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:11:52.918459  487084 fix.go:112] recreateIfNeeded on ha-907658: state=Stopped err=<nil>
	W1207 23:11:52.918485  487084 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:11:52.920300  487084 out.go:252] * Restarting existing docker container for "ha-907658" ...
	I1207 23:11:52.920381  487084 cli_runner.go:164] Run: docker start ha-907658
	I1207 23:11:53.154762  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:11:53.172884  487084 kic.go:430] container "ha-907658" state is running.
	I1207 23:11:53.173368  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:11:53.192850  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:53.193082  487084 machine.go:94] provisionDockerMachine start ...
	I1207 23:11:53.193169  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:53.211683  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:53.211988  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:53.212008  487084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:11:53.212567  487084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40796->127.0.0.1:33213: read: connection reset by peer
	I1207 23:11:56.342986  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658
	
	I1207 23:11:56.343016  487084 ubuntu.go:182] provisioning hostname "ha-907658"
	I1207 23:11:56.343087  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:56.361678  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:56.361914  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:56.361928  487084 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-907658 && echo "ha-907658" | sudo tee /etc/hostname
	I1207 23:11:56.498208  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658
	
	I1207 23:11:56.498287  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:56.517144  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:56.517409  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:56.517428  487084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-907658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-907658/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-907658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:11:56.645103  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:11:56.645138  487084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:11:56.645173  487084 ubuntu.go:190] setting up certificates
	I1207 23:11:56.645187  487084 provision.go:84] configureAuth start
	I1207 23:11:56.645254  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:11:56.663482  487084 provision.go:143] copyHostCerts
	I1207 23:11:56.663535  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:11:56.663565  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:11:56.663574  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:11:56.663652  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:11:56.663767  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:11:56.663794  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:11:56.663802  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:11:56.663845  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:11:56.663928  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:11:56.663951  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:11:56.663961  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:11:56.663999  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:11:56.664154  487084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.ha-907658 san=[127.0.0.1 192.168.49.2 ha-907658 localhost minikube]
	I1207 23:11:56.859476  487084 provision.go:177] copyRemoteCerts
	I1207 23:11:56.859539  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:11:56.859583  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:56.877854  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:56.971727  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 23:11:56.971784  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1207 23:11:56.989675  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 23:11:56.989726  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:11:57.006645  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 23:11:57.006699  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:11:57.024214  487084 provision.go:87] duration metric: took 379.007514ms to configureAuth
	I1207 23:11:57.024242  487084 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:11:57.024505  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:57.024648  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.043106  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:57.043322  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:57.043362  487084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:11:57.351275  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:11:57.351301  487084 machine.go:97] duration metric: took 4.158205159s to provisionDockerMachine
	I1207 23:11:57.351316  487084 start.go:293] postStartSetup for "ha-907658" (driver="docker")
	I1207 23:11:57.351345  487084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:11:57.351414  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:11:57.351463  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.370902  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.463959  487084 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:11:57.467550  487084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:11:57.467577  487084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:11:57.467590  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:11:57.467657  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:11:57.467762  487084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:11:57.467778  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /etc/ssl/certs/3931252.pem
	I1207 23:11:57.467888  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:11:57.475351  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:11:57.492383  487084 start.go:296] duration metric: took 141.051455ms for postStartSetup
	I1207 23:11:57.492490  487084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:11:57.492538  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.510719  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.601727  487084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:11:57.606180  487084 fix.go:56] duration metric: took 4.705212142s for fixHost
	I1207 23:11:57.606209  487084 start.go:83] releasing machines lock for "ha-907658", held for 4.705267868s
	I1207 23:11:57.606320  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:11:57.624104  487084 ssh_runner.go:195] Run: cat /version.json
	I1207 23:11:57.624182  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.624209  487084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:11:57.624294  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.642922  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.643662  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.785793  487084 ssh_runner.go:195] Run: systemctl --version
	I1207 23:11:57.792308  487084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:11:57.826743  487084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:11:57.831572  487084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:11:57.831644  487084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:11:57.839631  487084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:11:57.839653  487084 start.go:496] detecting cgroup driver to use...
	I1207 23:11:57.839690  487084 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:11:57.839733  487084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:11:57.853650  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:11:57.866122  487084 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:11:57.866194  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:11:57.880612  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:11:57.893020  487084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:11:57.971718  487084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:11:58.051170  487084 docker.go:234] disabling docker service ...
	I1207 23:11:58.051240  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:11:58.065815  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:11:58.078071  487084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:11:58.159158  487084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:11:58.241617  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:11:58.253808  487084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:11:58.267810  487084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:11:58.267865  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.276619  487084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:11:58.276694  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.285159  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.293362  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.301983  487084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:11:58.310270  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.319027  487084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.327563  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.336683  487084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:11:58.344663  487084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:11:58.352591  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:11:58.430723  487084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:11:58.561670  487084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:11:58.561748  487084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:11:58.565839  487084 start.go:564] Will wait 60s for crictl version
	I1207 23:11:58.565925  487084 ssh_runner.go:195] Run: which crictl
	I1207 23:11:58.569353  487084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:11:58.593853  487084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:11:58.593949  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:11:58.621201  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:11:58.650380  487084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:11:58.651543  487084 cli_runner.go:164] Run: docker network inspect ha-907658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:11:58.669539  487084 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:11:58.673718  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:11:58.684392  487084 kubeadm.go:884] updating cluster {Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:11:58.684550  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:11:58.684610  487084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:11:58.716893  487084 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:11:58.716915  487084 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:11:58.717012  487084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:11:58.743428  487084 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:11:58.743474  487084 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:11:58.743483  487084 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1207 23:11:58.743593  487084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-907658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:11:58.743655  487084 ssh_runner.go:195] Run: crio config
	I1207 23:11:58.789302  487084 cni.go:84] Creating CNI manager for ""
	I1207 23:11:58.789345  487084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1207 23:11:58.789368  487084 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:11:58.789396  487084 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-907658 NodeName:ha-907658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:11:58.789521  487084 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-907658"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:11:58.789548  487084 kube-vip.go:115] generating kube-vip config ...
	I1207 23:11:58.789589  487084 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1207 23:11:58.801884  487084 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:11:58.802014  487084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1207 23:11:58.802092  487084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:11:58.809827  487084 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:11:58.809897  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1207 23:11:58.817290  487084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1207 23:11:58.829895  487084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:11:58.842148  487084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1207 23:11:58.854128  487084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1207 23:11:58.866494  487084 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1207 23:11:58.870208  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:11:58.879832  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:11:58.957062  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:11:58.981696  487084 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658 for IP: 192.168.49.2
	I1207 23:11:58.981720  487084 certs.go:195] generating shared ca certs ...
	I1207 23:11:58.981747  487084 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:58.981923  487084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:11:58.981976  487084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:11:58.981990  487084 certs.go:257] generating profile certs ...
	I1207 23:11:58.982095  487084 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key
	I1207 23:11:58.982127  487084 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7
	I1207 23:11:58.982147  487084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1207 23:11:59.053446  487084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7 ...
	I1207 23:11:59.053484  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7: {Name:mkde9a77ed2ccf374bbd7ef2ab8471222e930ca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.053683  487084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7 ...
	I1207 23:11:59.053700  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7: {Name:mkf9f5e1f2966de715814128c39c83c05472c22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.053837  487084 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt
	I1207 23:11:59.054023  487084 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key
	I1207 23:11:59.054208  487084 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key
	I1207 23:11:59.054223  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 23:11:59.054240  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 23:11:59.054254  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 23:11:59.054268  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 23:11:59.054285  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 23:11:59.054298  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 23:11:59.054315  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 23:11:59.054346  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 23:11:59.054449  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:11:59.054492  487084 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:11:59.054503  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:11:59.054539  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:11:59.054597  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:11:59.054627  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:11:59.054683  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:11:59.054723  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem -> /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.054754  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.054767  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.055522  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:11:59.076096  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:11:59.092913  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:11:59.110126  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:11:59.126855  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1207 23:11:59.143407  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:11:59.160896  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:11:59.178517  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:11:59.196273  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:11:59.213156  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:11:59.230319  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:11:59.247989  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:11:59.259981  487084 ssh_runner.go:195] Run: openssl version
	I1207 23:11:59.265807  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.273185  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:11:59.280496  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.284023  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.284068  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.318047  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:11:59.325928  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.332951  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:11:59.340016  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.343716  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.343772  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.377866  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:11:59.386064  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.393852  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:11:59.401598  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.405548  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.405622  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.439621  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:11:59.447485  487084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:11:59.451341  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:11:59.493084  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:11:59.535906  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:11:59.583567  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:11:59.642172  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:11:59.681845  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
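The six openssl invocations above each pass -checkend 86400, i.e. they fail if the certificate in question would expire within the next 24 hours. A minimal Go sketch of the same check follows; the path is one of the certs from the log, and reading it locally rather than over SSH is an assumption made only for illustration, not minikube's actual code:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is what a non-zero exit from "openssl x509 -checkend" would indicate.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// The cert fails the check if now+d is already past its NotAfter.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}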
	I1207 23:11:59.717892  487084 kubeadm.go:401] StartCluster: {Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:11:59.718040  487084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:11:59.718122  487084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:11:59.750509  487084 cri.go:89] found id: "86601d9f6ba07c5cc957fcd84ee14c9ed14e0f86e2c332659c8fd9ca9c473cdd"
	I1207 23:11:59.750537  487084 cri.go:89] found id: "3102169518f14fb026edc01e1247ff4c2edc1292fb8d6ddab3310dc29262b65d"
	I1207 23:11:59.750543  487084 cri.go:89] found id: "87abab3f9975c7d1ffa51c90a94a832599db31aa8d9e2e4cdcccfa593c87020f"
	I1207 23:11:59.750548  487084 cri.go:89] found id: "db1d97b6874004dcfa1bfc301e8470ac6e8ab810f5002178c4d64e0899af2340"
	I1207 23:11:59.750560  487084 cri.go:89] found id: "04ab6dc0a72c2fd9ce998abf808c8139e9d16737d96e3dc5573726403cfba770"
	I1207 23:11:59.750567  487084 cri.go:89] found id: ""
	I1207 23:11:59.750620  487084 ssh_runner.go:195] Run: sudo runc list -f json
	W1207 23:11:59.763116  487084 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:11:59Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:11:59.763191  487084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:11:59.771453  487084 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:11:59.771471  487084 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:11:59.771524  487084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:11:59.778977  487084 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:11:59.779462  487084 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-907658" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:11:59.779590  487084 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-389542/kubeconfig needs updating (will repair): [kubeconfig missing "ha-907658" cluster setting kubeconfig missing "ha-907658" context setting]
	I1207 23:11:59.780044  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.780730  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 23:11:59.781268  487084 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1207 23:11:59.781286  487084 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1207 23:11:59.781293  487084 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1207 23:11:59.781300  487084 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1207 23:11:59.781318  487084 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1207 23:11:59.781314  487084 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1207 23:11:59.781841  487084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:11:59.790236  487084 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1207 23:11:59.790262  487084 kubeadm.go:602] duration metric: took 18.784379ms to restartPrimaryControlPlane
	I1207 23:11:59.790272  487084 kubeadm.go:403] duration metric: took 72.393488ms to StartCluster
	I1207 23:11:59.790292  487084 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.790408  487084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:11:59.791175  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.791433  487084 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:11:59.791463  487084 start.go:242] waiting for startup goroutines ...
	I1207 23:11:59.791480  487084 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:11:59.791743  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:59.794127  487084 out.go:179] * Enabled addons: 
	I1207 23:11:59.795136  487084 addons.go:530] duration metric: took 3.661252ms for enable addons: enabled=[]
	I1207 23:11:59.795167  487084 start.go:247] waiting for cluster config update ...
	I1207 23:11:59.795178  487084 start.go:256] writing updated cluster config ...
	I1207 23:11:59.796468  487084 out.go:203] 
	I1207 23:11:59.797620  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:59.797739  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:59.799011  487084 out.go:179] * Starting "ha-907658-m02" control-plane node in "ha-907658" cluster
	I1207 23:11:59.799852  487084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:11:59.800858  487084 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:11:59.801718  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:11:59.801733  487084 cache.go:65] Caching tarball of preloaded images
	I1207 23:11:59.801784  487084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:11:59.801821  487084 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:11:59.801834  487084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:11:59.801944  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:59.823527  487084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:11:59.823550  487084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:11:59.823570  487084 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:11:59.823603  487084 start.go:360] acquireMachinesLock for ha-907658-m02: {Name:mk6484dd4dfe7ba137d5f583543a1831d27edba5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:11:59.823673  487084 start.go:364] duration metric: took 49.067µs to acquireMachinesLock for "ha-907658-m02"
	I1207 23:11:59.823696  487084 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:11:59.823702  487084 fix.go:54] fixHost starting: m02
	I1207 23:11:59.823927  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m02 --format={{.State.Status}}
	I1207 23:11:59.844560  487084 fix.go:112] recreateIfNeeded on ha-907658-m02: state=Stopped err=<nil>
	W1207 23:11:59.844589  487084 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:11:59.846377  487084 out.go:252] * Restarting existing docker container for "ha-907658-m02" ...
	I1207 23:11:59.846453  487084 cli_runner.go:164] Run: docker start ha-907658-m02
	I1207 23:12:00.130224  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m02 --format={{.State.Status}}
	I1207 23:12:00.155491  487084 kic.go:430] container "ha-907658-m02" state is running.
	I1207 23:12:00.155911  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m02
	I1207 23:12:00.178281  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:00.178573  487084 machine.go:94] provisionDockerMachine start ...
	I1207 23:12:00.178649  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:00.198614  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:00.198945  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:00.198960  487084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:12:00.199661  487084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38884->127.0.0.1:33218: read: connection reset by peer
	I1207 23:12:03.333342  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m02
	
	I1207 23:12:03.333382  487084 ubuntu.go:182] provisioning hostname "ha-907658-m02"
	I1207 23:12:03.333446  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:03.352148  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:03.352463  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:03.352484  487084 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-907658-m02 && echo "ha-907658-m02" | sudo tee /etc/hostname
	I1207 23:12:03.505996  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m02
	
	I1207 23:12:03.506086  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:03.523096  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:03.523409  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:03.523430  487084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-907658-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-907658-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-907658-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:12:03.654538  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:12:03.654571  487084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:12:03.654593  487084 ubuntu.go:190] setting up certificates
	I1207 23:12:03.654607  487084 provision.go:84] configureAuth start
	I1207 23:12:03.654667  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m02
	I1207 23:12:03.678200  487084 provision.go:143] copyHostCerts
	I1207 23:12:03.678248  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:03.678285  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:12:03.678297  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:03.678397  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:12:03.678500  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:03.678535  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:12:03.678546  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:03.678587  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:12:03.678657  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:03.678682  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:12:03.678690  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:03.678715  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:12:03.678770  487084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.ha-907658-m02 san=[127.0.0.1 192.168.49.3 ha-907658-m02 localhost minikube]
	I1207 23:12:03.790264  487084 provision.go:177] copyRemoteCerts
	I1207 23:12:03.790352  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:12:03.790402  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:03.823101  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:03.924465  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 23:12:03.924539  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:12:03.944485  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 23:12:03.944556  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 23:12:03.968961  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 23:12:03.969036  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:12:03.995367  487084 provision.go:87] duration metric: took 340.743667ms to configureAuth
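The provision step above generates a machine server certificate whose SANs cover the node's loopback and cluster IPs plus its hostnames. A sketch of the corresponding x509 template, reusing the SAN list and org from the provision.go log line; key generation and CA signing are omitted, and the 3-year lifetime is an assumption for illustration:

	package main

	import (
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// SANs taken from: san=[127.0.0.1 192.168.49.3 ha-907658-m02 localhost minikube]
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-907658-m02"}},
			DNSNames:     []string{"ha-907658-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		}
		fmt.Printf("server cert template with %d DNS and %d IP SANs\n",
			len(tmpl.DNSNames), len(tmpl.IPAddresses))
	}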
	I1207 23:12:03.995400  487084 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:12:03.995657  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:03.995779  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.026533  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:04.026857  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:04.026885  487084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:12:04.415911  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:12:04.415941  487084 machine.go:97] duration metric: took 4.237351611s to provisionDockerMachine
	I1207 23:12:04.415957  487084 start.go:293] postStartSetup for "ha-907658-m02" (driver="docker")
	I1207 23:12:04.415971  487084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:12:04.416028  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:12:04.416078  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.434685  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.530207  487084 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:12:04.533967  487084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:12:04.533999  487084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:12:04.534014  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:12:04.534066  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:12:04.534139  487084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:12:04.534149  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /etc/ssl/certs/3931252.pem
	I1207 23:12:04.534230  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:12:04.542117  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:04.560472  487084 start.go:296] duration metric: took 144.495639ms for postStartSetup
	I1207 23:12:04.560570  487084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:12:04.560625  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.577649  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.669363  487084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:12:04.674346  487084 fix.go:56] duration metric: took 4.85062394s for fixHost
	I1207 23:12:04.674372  487084 start.go:83] releasing machines lock for "ha-907658-m02", held for 4.850686194s
	I1207 23:12:04.674436  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m02
	I1207 23:12:04.693901  487084 out.go:179] * Found network options:
	I1207 23:12:04.695122  487084 out.go:179]   - NO_PROXY=192.168.49.2
	W1207 23:12:04.696299  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:04.696348  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	I1207 23:12:04.696432  487084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:12:04.696482  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.696491  487084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:12:04.696545  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.715832  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.716229  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.880414  487084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:12:04.885363  487084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:12:04.885437  487084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:12:04.893312  487084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:12:04.893347  487084 start.go:496] detecting cgroup driver to use...
	I1207 23:12:04.893386  487084 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:12:04.893433  487084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:12:04.908112  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:12:04.920708  487084 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:12:04.920806  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:12:04.935538  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:12:04.948970  487084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:12:05.093803  487084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:12:05.237498  487084 docker.go:234] disabling docker service ...
	I1207 23:12:05.237578  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:12:05.255362  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:12:05.271477  487084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:12:05.401811  487084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:12:05.532521  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:12:05.547785  487084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:12:05.566033  487084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:12:05.566094  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.577067  487084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:12:05.577126  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.589050  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.599566  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.609984  487084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:12:05.619430  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.632001  487084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.642199  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.652617  487084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:12:05.661297  487084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:12:05.671605  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:05.817088  487084 ssh_runner.go:195] Run: sudo systemctl restart crio
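The sed commands above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf (the pause image and the cgroup manager) before crio is restarted. A rough Go sketch of that line rewrite, assuming the same file path and values; minikube itself performs this with sed over SSH as logged, so this is illustrative only:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// rewriteCrioConf replaces the pause_image and cgroup_manager lines in a
	// crio drop-in config, leaving every other line untouched.
	func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
		in, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var out strings.Builder
		sc := bufio.NewScanner(strings.NewReader(string(in)))
		for sc.Scan() {
			line := sc.Text()
			switch {
			case strings.Contains(line, "pause_image ="):
				line = fmt.Sprintf("pause_image = %q", pauseImage)
			case strings.Contains(line, "cgroup_manager ="):
				line = fmt.Sprintf("cgroup_manager = %q", cgroupManager)
			}
			out.WriteString(line + "\n")
		}
		if err := sc.Err(); err != nil {
			return err
		}
		return os.WriteFile(path, []byte(out.String()), 0o644)
	}

	func main() {
		if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.10.1", "systemd"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}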
	I1207 23:12:06.027922  487084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:12:06.027991  487084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:12:06.032083  487084 start.go:564] Will wait 60s for crictl version
	I1207 23:12:06.032144  487084 ssh_runner.go:195] Run: which crictl
	I1207 23:12:06.035913  487084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:12:06.060174  487084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:12:06.060268  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:06.088918  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:06.119010  487084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:12:06.120321  487084 out.go:179]   - env NO_PROXY=192.168.49.2
	I1207 23:12:06.121801  487084 cli_runner.go:164] Run: docker network inspect ha-907658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:12:06.139719  487084 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:12:06.143993  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:06.155217  487084 mustload.go:66] Loading cluster: ha-907658
	I1207 23:12:06.155433  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:06.155653  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:12:06.173920  487084 host.go:66] Checking if "ha-907658" exists ...
	I1207 23:12:06.174154  487084 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658 for IP: 192.168.49.3
	I1207 23:12:06.174165  487084 certs.go:195] generating shared ca certs ...
	I1207 23:12:06.174179  487084 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:12:06.174311  487084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:12:06.174381  487084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:12:06.174397  487084 certs.go:257] generating profile certs ...
	I1207 23:12:06.174493  487084 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key
	I1207 23:12:06.174583  487084 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.39a0badd
	I1207 23:12:06.174639  487084 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key
	I1207 23:12:06.174654  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 23:12:06.174671  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 23:12:06.174693  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 23:12:06.174708  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 23:12:06.174722  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 23:12:06.174739  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 23:12:06.174753  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 23:12:06.174772  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 23:12:06.174836  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:12:06.174877  487084 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:12:06.174891  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:12:06.174926  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:12:06.174963  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:12:06.174996  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:12:06.175052  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:06.175095  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.175115  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.175131  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem -> /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.175194  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:12:06.197420  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:12:06.283673  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1207 23:12:06.290449  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1207 23:12:06.302775  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1207 23:12:06.308469  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1207 23:12:06.317835  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1207 23:12:06.321609  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1207 23:12:06.330066  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1207 23:12:06.333816  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1207 23:12:06.345628  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1207 23:12:06.352380  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1207 23:12:06.360869  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1207 23:12:06.364787  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1207 23:12:06.374104  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:12:06.394705  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:12:06.413194  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:12:06.432115  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:12:06.449406  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1207 23:12:06.466917  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:12:06.498654  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:12:06.528737  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:12:06.546449  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:12:06.564005  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:12:06.582815  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:12:06.601666  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1207 23:12:06.615105  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1207 23:12:06.631379  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1207 23:12:06.646798  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1207 23:12:06.659864  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1207 23:12:06.675256  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1207 23:12:06.690795  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1207 23:12:06.705444  487084 ssh_runner.go:195] Run: openssl version
	I1207 23:12:06.712063  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.720029  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:12:06.728834  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.733304  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.733391  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.771128  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:12:06.779038  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.787058  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:12:06.794858  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.798600  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.798662  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.834714  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:12:06.842519  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.849816  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:12:06.857109  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.860827  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.860876  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.901264  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:12:06.909596  487084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:12:06.913535  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:12:06.953706  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:12:06.990023  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:12:07.024365  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:12:07.059478  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:12:07.093656  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 23:12:07.130433  487084 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1207 23:12:07.130566  487084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-907658-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:12:07.130596  487084 kube-vip.go:115] generating kube-vip config ...
	I1207 23:12:07.130647  487084 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1207 23:12:07.142960  487084 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:12:07.143037  487084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
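Because "lsmod | grep ip_vs" exited non-zero a few lines earlier, the manifest above was generated without control-plane load balancing. A small sketch of the same kernel-module check, assuming /proc/modules is readable on the node; this is not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ipvsLoaded checks /proc/modules (the source lsmod reads) for the ip_vs module.
	func ipvsLoaded() (bool, error) {
		data, err := os.ReadFile("/proc/modules")
		if err != nil {
			return false, err
		}
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasPrefix(line, "ip_vs") {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := ipvsLoaded()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("ip_vs loaded:", ok)
	}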
	I1207 23:12:07.143109  487084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:12:07.151538  487084 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:12:07.151608  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1207 23:12:07.159652  487084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1207 23:12:07.172062  487084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:12:07.184591  487084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1207 23:12:07.197988  487084 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1207 23:12:07.201949  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:07.212295  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:07.335873  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:07.349280  487084 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:12:07.349636  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:07.351992  487084 out.go:179] * Verifying Kubernetes components...
	I1207 23:12:07.353164  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:07.482271  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:07.495426  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1207 23:12:07.495497  487084 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1207 23:12:07.495703  487084 node_ready.go:35] waiting up to 6m0s for node "ha-907658-m02" to be "Ready" ...
	I1207 23:12:07.504809  487084 node_ready.go:49] node "ha-907658-m02" is "Ready"
	I1207 23:12:07.504835  487084 node_ready.go:38] duration metric: took 9.118175ms for node "ha-907658-m02" to be "Ready" ...
	I1207 23:12:07.504849  487084 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:12:07.504891  487084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:12:07.517382  487084 api_server.go:72] duration metric: took 168.030727ms to wait for apiserver process to appear ...
	I1207 23:12:07.517409  487084 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:12:07.517436  487084 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1207 23:12:07.523117  487084 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1207 23:12:07.524187  487084 api_server.go:141] control plane version: v1.34.2
	I1207 23:12:07.524214  487084 api_server.go:131] duration metric: took 6.79771ms to wait for apiserver health ...
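
The healthz probe logged above simply expects an HTTP 200 with body "ok" from the API server. Below is a minimal Go sketch of an equivalent probe against https://192.168.49.2:8443/healthz; it skips TLS verification purely to stay self-contained, whereas the check in the log authenticates with the profile's client certificates.

// Minimal sketch of the healthz probe performed above, assuming direct access to
// the API server at 192.168.49.2:8443. InsecureSkipVerify is used only to keep
// the example self-contained; minikube's real check uses client certs and a CA.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body)) // expect 200 and "ok"
}
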
	I1207 23:12:07.524224  487084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:12:07.530960  487084 system_pods.go:59] 26 kube-system pods found
	I1207 23:12:07.531007  487084 system_pods.go:61] "coredns-66bc5c9577-7lkd8" [87d8dbef-c05d-4fcd-b08e-4ee6bce689ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.531030  487084 system_pods.go:61] "coredns-66bc5c9577-j9lqh" [50fb7869-af19-4fe4-a49d-bf8431faa47e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.531045  487084 system_pods.go:61] "etcd-ha-907658" [a1045f46-63e5-4adf-8cba-698626661685] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.531055  487084 system_pods.go:61] "etcd-ha-907658-m02" [e0fd4196-c559-4ed5-a866-f2edca5d028b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.531065  487084 system_pods.go:61] "etcd-ha-907658-m03" [ec660b37-46e0-4ea6-8143-43a215cb208e] Running
	I1207 23:12:07.531077  487084 system_pods.go:61] "kindnet-5lg58" [595946fb-4b57-4869-85e2-75debf3486ae] Running
	I1207 23:12:07.531082  487084 system_pods.go:61] "kindnet-9rqhs" [78003a20-15f9-43e0-8a11-9c215ade326b] Running
	I1207 23:12:07.531086  487084 system_pods.go:61] "kindnet-hzfvq" [8c0ef1d7-39de-46ce-b16b-4d2794e7dc20] Running
	I1207 23:12:07.531090  487084 system_pods.go:61] "kindnet-wvnmz" [464814b4-64d5-4cae-b298-44186fe9b844] Running
	I1207 23:12:07.531102  487084 system_pods.go:61] "kube-apiserver-ha-907658" [746157f2-b5d4-4a22-b0d0-e186dba5c022] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.531114  487084 system_pods.go:61] "kube-apiserver-ha-907658-m02" [69e1f1f9-cc80-4383-8bf2-cd362ab2fc9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.531122  487084 system_pods.go:61] "kube-apiserver-ha-907658-m03" [6dd58630-2169-4539-b8eb-d9971aef28c0] Running
	I1207 23:12:07.531128  487084 system_pods.go:61] "kube-controller-manager-ha-907658" [86717111-1edd-4e7d-bd64-87a0b751fd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.531132  487084 system_pods.go:61] "kube-controller-manager-ha-907658-m02" [2edf59bb-e62d-4897-9d2f-6a454cc72644] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.531138  487084 system_pods.go:61] "kube-controller-manager-ha-907658-m03" [87b33e73-dedd-477d-87fa-42e198df84ba] Running
	I1207 23:12:07.531141  487084 system_pods.go:61] "kube-proxy-8fwsf" [1d7267ee-074b-40da-bfe0-4b434d732d8c] Running
	I1207 23:12:07.531147  487084 system_pods.go:61] "kube-proxy-b8vz9" [cd4b68a6-4528-4644-bac6-158d1bffd0ed] Running
	I1207 23:12:07.531150  487084 system_pods.go:61] "kube-proxy-r5c77" [c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9] Running
	I1207 23:12:07.531153  487084 system_pods.go:61] "kube-proxy-sdhd8" [55e62bf1-af57-4c34-925a-c44c47ce32ce] Running
	I1207 23:12:07.531157  487084 system_pods.go:61] "kube-scheduler-ha-907658" [16a4e936-d293-4107-b559-200f764f7dd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.531164  487084 system_pods.go:61] "kube-scheduler-ha-907658-m02" [85e3e5a5-fe1f-4994-90d4-c4e42a5a887f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.531175  487084 system_pods.go:61] "kube-scheduler-ha-907658-m03" [ca765146-fd0b-4cc8-9f6e-55e2601a5033] Running
	I1207 23:12:07.531178  487084 system_pods.go:61] "kube-vip-ha-907658" [2fc8fc0b-3f23-44d1-909a-20f06169c8dd] Running
	I1207 23:12:07.531181  487084 system_pods.go:61] "kube-vip-ha-907658-m02" [53a8762d-c686-486f-9814-2f40e4ff3306] Running
	I1207 23:12:07.531184  487084 system_pods.go:61] "kube-vip-ha-907658-m03" [6bc4a730-7a65-43a8-a746-2bc3ffa9ccc8] Running
	I1207 23:12:07.531186  487084 system_pods.go:61] "storage-provisioner" [5e80f8de-afe9-4c94-997c-c06f5ff985db] Running
	I1207 23:12:07.531192  487084 system_pods.go:74] duration metric: took 6.96154ms to wait for pod list to return data ...
	I1207 23:12:07.531202  487084 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:12:07.533477  487084 default_sa.go:45] found service account: "default"
	I1207 23:12:07.533501  487084 default_sa.go:55] duration metric: took 2.292892ms for default service account to be created ...
	I1207 23:12:07.533508  487084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:12:07.539025  487084 system_pods.go:86] 26 kube-system pods found
	I1207 23:12:07.539051  487084 system_pods.go:89] "coredns-66bc5c9577-7lkd8" [87d8dbef-c05d-4fcd-b08e-4ee6bce689ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.539059  487084 system_pods.go:89] "coredns-66bc5c9577-j9lqh" [50fb7869-af19-4fe4-a49d-bf8431faa47e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.539067  487084 system_pods.go:89] "etcd-ha-907658" [a1045f46-63e5-4adf-8cba-698626661685] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.539072  487084 system_pods.go:89] "etcd-ha-907658-m02" [e0fd4196-c559-4ed5-a866-f2edca5d028b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.539076  487084 system_pods.go:89] "etcd-ha-907658-m03" [ec660b37-46e0-4ea6-8143-43a215cb208e] Running
	I1207 23:12:07.539080  487084 system_pods.go:89] "kindnet-5lg58" [595946fb-4b57-4869-85e2-75debf3486ae] Running
	I1207 23:12:07.539083  487084 system_pods.go:89] "kindnet-9rqhs" [78003a20-15f9-43e0-8a11-9c215ade326b] Running
	I1207 23:12:07.539087  487084 system_pods.go:89] "kindnet-hzfvq" [8c0ef1d7-39de-46ce-b16b-4d2794e7dc20] Running
	I1207 23:12:07.539090  487084 system_pods.go:89] "kindnet-wvnmz" [464814b4-64d5-4cae-b298-44186fe9b844] Running
	I1207 23:12:07.539097  487084 system_pods.go:89] "kube-apiserver-ha-907658" [746157f2-b5d4-4a22-b0d0-e186dba5c022] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.539105  487084 system_pods.go:89] "kube-apiserver-ha-907658-m02" [69e1f1f9-cc80-4383-8bf2-cd362ab2fc9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.539109  487084 system_pods.go:89] "kube-apiserver-ha-907658-m03" [6dd58630-2169-4539-b8eb-d9971aef28c0] Running
	I1207 23:12:07.539118  487084 system_pods.go:89] "kube-controller-manager-ha-907658" [86717111-1edd-4e7d-bd64-87a0b751fd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.539123  487084 system_pods.go:89] "kube-controller-manager-ha-907658-m02" [2edf59bb-e62d-4897-9d2f-6a454cc72644] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.539127  487084 system_pods.go:89] "kube-controller-manager-ha-907658-m03" [87b33e73-dedd-477d-87fa-42e198df84ba] Running
	I1207 23:12:07.539130  487084 system_pods.go:89] "kube-proxy-8fwsf" [1d7267ee-074b-40da-bfe0-4b434d732d8c] Running
	I1207 23:12:07.539139  487084 system_pods.go:89] "kube-proxy-b8vz9" [cd4b68a6-4528-4644-bac6-158d1bffd0ed] Running
	I1207 23:12:07.539144  487084 system_pods.go:89] "kube-proxy-r5c77" [c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9] Running
	I1207 23:12:07.539153  487084 system_pods.go:89] "kube-proxy-sdhd8" [55e62bf1-af57-4c34-925a-c44c47ce32ce] Running
	I1207 23:12:07.539159  487084 system_pods.go:89] "kube-scheduler-ha-907658" [16a4e936-d293-4107-b559-200f764f7dd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.539164  487084 system_pods.go:89] "kube-scheduler-ha-907658-m02" [85e3e5a5-fe1f-4994-90d4-c4e42a5a887f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.539167  487084 system_pods.go:89] "kube-scheduler-ha-907658-m03" [ca765146-fd0b-4cc8-9f6e-55e2601a5033] Running
	I1207 23:12:07.539171  487084 system_pods.go:89] "kube-vip-ha-907658" [2fc8fc0b-3f23-44d1-909a-20f06169c8dd] Running
	I1207 23:12:07.539174  487084 system_pods.go:89] "kube-vip-ha-907658-m02" [53a8762d-c686-486f-9814-2f40e4ff3306] Running
	I1207 23:12:07.539176  487084 system_pods.go:89] "kube-vip-ha-907658-m03" [6bc4a730-7a65-43a8-a746-2bc3ffa9ccc8] Running
	I1207 23:12:07.539181  487084 system_pods.go:89] "storage-provisioner" [5e80f8de-afe9-4c94-997c-c06f5ff985db] Running
	I1207 23:12:07.539191  487084 system_pods.go:126] duration metric: took 5.677775ms to wait for k8s-apps to be running ...
	I1207 23:12:07.539200  487084 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:12:07.539244  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:12:07.552415  487084 system_svc.go:56] duration metric: took 13.204195ms WaitForService to wait for kubelet
	I1207 23:12:07.552445  487084 kubeadm.go:587] duration metric: took 203.099861ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:12:07.552461  487084 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:12:07.556717  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:07.556763  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:07.556789  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:07.556794  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:07.556800  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:07.556804  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:07.556815  487084 node_conditions.go:105] duration metric: took 4.343663ms to run NodePressure ...
	I1207 23:12:07.556830  487084 start.go:242] waiting for startup goroutines ...
	I1207 23:12:07.556864  487084 start.go:256] writing updated cluster config ...
	I1207 23:12:07.559024  487084 out.go:203] 
	I1207 23:12:07.560420  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:07.560527  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:07.562073  487084 out.go:179] * Starting "ha-907658-m04" worker node in "ha-907658" cluster
	I1207 23:12:07.563315  487084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:12:07.564547  487084 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:12:07.565586  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:12:07.565600  487084 cache.go:65] Caching tarball of preloaded images
	I1207 23:12:07.565653  487084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:12:07.565684  487084 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:12:07.565695  487084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:12:07.565787  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:07.585455  487084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:12:07.585473  487084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:12:07.585488  487084 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:12:07.585525  487084 start.go:360] acquireMachinesLock for ha-907658-m04: {Name:mkbf928fa5c7c7d65c3e97ec1b1d2c403a4aafbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:12:07.585593  487084 start.go:364] duration metric: took 46.24µs to acquireMachinesLock for "ha-907658-m04"
	I1207 23:12:07.585618  487084 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:12:07.585630  487084 fix.go:54] fixHost starting: m04
	I1207 23:12:07.585905  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m04 --format={{.State.Status}}
	I1207 23:12:07.603987  487084 fix.go:112] recreateIfNeeded on ha-907658-m04: state=Stopped err=<nil>
	W1207 23:12:07.604014  487084 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:12:07.605765  487084 out.go:252] * Restarting existing docker container for "ha-907658-m04" ...
	I1207 23:12:07.605839  487084 cli_runner.go:164] Run: docker start ha-907658-m04
	I1207 23:12:07.853178  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m04 --format={{.State.Status}}
	I1207 23:12:07.874755  487084 kic.go:430] container "ha-907658-m04" state is running.
	I1207 23:12:07.875212  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:12:07.896653  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:07.897024  487084 machine.go:94] provisionDockerMachine start ...
	I1207 23:12:07.897151  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:07.918923  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:07.919195  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:07.919216  487084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:12:07.919824  487084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49894->127.0.0.1:33223: read: connection reset by peer
	I1207 23:12:11.048469  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m04
	
	I1207 23:12:11.048499  487084 ubuntu.go:182] provisioning hostname "ha-907658-m04"
	I1207 23:12:11.048563  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.066447  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:11.066738  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:11.066753  487084 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-907658-m04 && echo "ha-907658-m04" | sudo tee /etc/hostname
	I1207 23:12:11.206276  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m04
	
	I1207 23:12:11.206388  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.225667  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:11.225909  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:11.225925  487084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-907658-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-907658-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-907658-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:12:11.355703  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:12:11.355747  487084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:12:11.355789  487084 ubuntu.go:190] setting up certificates
	I1207 23:12:11.355803  487084 provision.go:84] configureAuth start
	I1207 23:12:11.355885  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:12:11.374837  487084 provision.go:143] copyHostCerts
	I1207 23:12:11.374879  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:11.374918  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:12:11.374932  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:11.375021  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:12:11.375125  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:11.375155  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:12:11.375165  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:11.375205  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:12:11.375256  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:11.375278  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:12:11.375284  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:11.375321  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:12:11.375435  487084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.ha-907658-m04 san=[127.0.0.1 192.168.49.5 ha-907658-m04 localhost minikube]
	I1207 23:12:11.430934  487084 provision.go:177] copyRemoteCerts
	I1207 23:12:11.431006  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:12:11.431063  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.449187  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:11.543515  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 23:12:11.543582  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 23:12:11.562188  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 23:12:11.562249  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 23:12:11.579970  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 23:12:11.580024  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:12:11.597607  487084 provision.go:87] duration metric: took 241.785948ms to configureAuth
	I1207 23:12:11.597642  487084 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:12:11.597863  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:11.597964  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.616041  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:11.616267  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:11.616282  487084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:12:11.900554  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:12:11.900587  487084 machine.go:97] duration metric: took 4.00354246s to provisionDockerMachine
	I1207 23:12:11.900600  487084 start.go:293] postStartSetup for "ha-907658-m04" (driver="docker")
	I1207 23:12:11.900611  487084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:12:11.900667  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:12:11.900705  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.919920  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.015993  487084 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:12:12.019664  487084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:12:12.019701  487084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:12:12.019713  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:12:12.019773  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:12:12.019880  487084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:12:12.019892  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /etc/ssl/certs/3931252.pem
	I1207 23:12:12.020003  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:12:12.028252  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:12.045963  487084 start.go:296] duration metric: took 145.345162ms for postStartSetup
	I1207 23:12:12.046054  487084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:12:12.046100  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:12.064419  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.155615  487084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:12:12.160279  487084 fix.go:56] duration metric: took 4.57464273s for fixHost
	I1207 23:12:12.160305  487084 start.go:83] releasing machines lock for "ha-907658-m04", held for 4.574698172s
	I1207 23:12:12.160388  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:12:12.180857  487084 out.go:179] * Found network options:
	I1207 23:12:12.182145  487084 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1207 23:12:12.183173  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:12.183195  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:12.183220  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:12.183237  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	I1207 23:12:12.183304  487084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:12:12.183368  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:12.183387  487084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:12:12.183450  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:12.203407  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.203844  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.357625  487084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:12:12.362541  487084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:12:12.362619  487084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:12:12.370757  487084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:12:12.370785  487084 start.go:496] detecting cgroup driver to use...
	I1207 23:12:12.370818  487084 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:12:12.370864  487084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:12:12.385478  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:12:12.398446  487084 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:12:12.398518  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:12:12.413312  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:12:12.425964  487084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:12:12.508240  487084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:12:12.594377  487084 docker.go:234] disabling docker service ...
	I1207 23:12:12.594469  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:12:12.609287  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:12:12.621518  487084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:12:12.706445  487084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:12:12.788828  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:12:12.801567  487084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:12:12.815799  487084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:12:12.815866  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.824631  487084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:12:12.824701  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.834415  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.843435  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.852233  487084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:12:12.861003  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.870357  487084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.879159  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.888283  487084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:12:12.896022  487084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:12:12.903097  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:12.988157  487084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:12:13.133593  487084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:12:13.133671  487084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:12:13.137843  487084 start.go:564] Will wait 60s for crictl version
	I1207 23:12:13.137917  487084 ssh_runner.go:195] Run: which crictl
	I1207 23:12:13.141433  487084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:12:13.167512  487084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
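
After restarting CRI-O, the log waits up to 60s for the socket at /var/run/crio/crio.sock and then for a usable crictl. The Go sketch below illustrates that socket wait in simplified form; the polling interval and error handling are assumptions, not minikube's exact start.go logic.

// Simplified sketch of the "Will wait 60s for socket path" step above:
// poll for /var/run/crio/crio.sock until it exists or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("socket ready:", sock)
			return
		}
		time.Sleep(500 * time.Millisecond) // assumed interval, not minikube's
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
}
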
	I1207 23:12:13.167597  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:13.199036  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:13.229455  487084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:12:13.230791  487084 out.go:179]   - env NO_PROXY=192.168.49.2
	I1207 23:12:13.232057  487084 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1207 23:12:13.233540  487084 cli_runner.go:164] Run: docker network inspect ha-907658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:12:13.250726  487084 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:12:13.254740  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:13.265197  487084 mustload.go:66] Loading cluster: ha-907658
	I1207 23:12:13.265455  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:13.265697  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:12:13.284748  487084 host.go:66] Checking if "ha-907658" exists ...
	I1207 23:12:13.285028  487084 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658 for IP: 192.168.49.5
	I1207 23:12:13.285041  487084 certs.go:195] generating shared ca certs ...
	I1207 23:12:13.285056  487084 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:12:13.285200  487084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:12:13.285261  487084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:12:13.285280  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 23:12:13.285300  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 23:12:13.285317  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 23:12:13.285349  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 23:12:13.285417  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:12:13.285460  487084 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:12:13.285474  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:12:13.285512  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:12:13.285554  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:12:13.285592  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:12:13.285658  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:13.285698  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.285722  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem -> /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.285741  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.285769  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:12:13.304120  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:12:13.322222  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:12:13.340050  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:12:13.357784  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:12:13.376383  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:12:13.395635  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:12:13.413473  487084 ssh_runner.go:195] Run: openssl version
	I1207 23:12:13.419754  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.427021  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:12:13.434993  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.439202  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.439267  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.473339  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:12:13.481399  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.488584  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:12:13.495734  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.499349  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.499394  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.534119  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:12:13.542358  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.550110  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:12:13.557923  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.561771  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.561821  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.600731  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:12:13.608915  487084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:12:13.612836  487084 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:12:13.612892  487084 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2  false true} ...
	I1207 23:12:13.613000  487084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-907658-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:12:13.613071  487084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:12:13.620905  487084 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:12:13.620964  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1207 23:12:13.628840  487084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1207 23:12:13.642519  487084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:12:13.655821  487084 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1207 23:12:13.660403  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:13.672258  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:13.756400  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:13.769720  487084 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1207 23:12:13.770008  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:13.772651  487084 out.go:179] * Verifying Kubernetes components...
	I1207 23:12:13.773857  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:13.857293  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:13.870886  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1207 23:12:13.870958  487084 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1207 23:12:13.871160  487084 node_ready.go:35] waiting up to 6m0s for node "ha-907658-m04" to be "Ready" ...
	I1207 23:12:13.874196  487084 node_ready.go:49] node "ha-907658-m04" is "Ready"
	I1207 23:12:13.874220  487084 node_ready.go:38] duration metric: took 3.046821ms for node "ha-907658-m04" to be "Ready" ...
	I1207 23:12:13.874233  487084 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:12:13.874273  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:12:13.886840  487084 system_svc.go:56] duration metric: took 12.598168ms WaitForService to wait for kubelet
	I1207 23:12:13.886868  487084 kubeadm.go:587] duration metric: took 117.090427ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:12:13.886885  487084 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:12:13.890337  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:13.890362  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:13.890375  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:13.890380  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:13.890386  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:13.890392  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:13.890400  487084 node_conditions.go:105] duration metric: took 3.509832ms to run NodePressure ...
	I1207 23:12:13.890416  487084 start.go:242] waiting for startup goroutines ...
	I1207 23:12:13.890446  487084 start.go:256] writing updated cluster config ...
	I1207 23:12:13.890792  487084 ssh_runner.go:195] Run: rm -f paused
	I1207 23:12:13.894562  487084 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:12:13.895171  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 23:12:13.903646  487084 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7lkd8" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:12:15.910233  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:17.910533  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:20.410624  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:22.909833  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:25.410696  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:27.909729  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:29.911016  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:32.410597  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:34.410833  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:36.909456  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:38.911942  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:41.410807  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:43.910363  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:46.411526  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:48.911050  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:51.412217  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:53.910759  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:56.410211  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:58.410607  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:00.411373  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:02.910918  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:05.409687  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:07.409957  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:09.910681  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:12.410492  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:14.410764  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:16.909949  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:18.910470  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:20.911090  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:23.410279  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:25.910548  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:27.910666  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:30.410084  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:32.410161  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:34.411051  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:36.910027  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:39.410570  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:41.909517  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:43.910651  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:46.409768  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:48.410760  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:50.910511  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:52.910970  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:55.410193  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:57.410684  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:59.911085  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:01.911298  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:04.410828  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:06.910004  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:08.910803  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:11.410260  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:13.410549  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:15.911180  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:18.410236  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:20.910248  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:23.410312  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:25.909481  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:27.910308  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:29.910475  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:32.410112  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:34.910739  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:37.410174  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:39.410772  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:41.910812  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:44.409997  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:46.410369  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:48.910126  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:50.910698  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:53.410089  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:55.410604  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:57.910049  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:59.910503  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:02.409755  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:04.909540  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:06.910504  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:09.409997  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:11.411142  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:13.910274  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:16.410995  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:18.909895  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:20.909974  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:22.910657  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:25.410074  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:27.410196  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:29.410456  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:31.910828  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:34.410231  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:36.410432  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:38.909644  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:40.910092  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:42.910856  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:45.409802  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:47.410082  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:49.410149  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:51.910490  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:54.409927  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:56.410532  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:58.909671  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:00.910288  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:02.910545  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:05.410175  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:07.909887  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:09.910041  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:11.910457  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	I1207 23:16:13.895206  487084 pod_ready.go:86] duration metric: took 3m59.991503796s for pod "coredns-66bc5c9577-7lkd8" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:16:13.895245  487084 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1207 23:16:13.895263  487084 pod_ready.go:40] duration metric: took 4m0.000670566s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:16:13.897256  487084 out.go:203] 
	W1207 23:16:13.898559  487084 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1207 23:16:13.899846  487084 out.go:203] 
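
The four-minute wait above ends with "WaitExtra: context deadline exceeded": the pod coredns-66bc5c9577-7lkd8 never reported Ready before the overall deadline expired, which is what fails the restart. A minimal sketch of this poll-until-ready-or-gone pattern with client-go is shown below; the interval, deadline, and helper name are illustrative, not minikube's actual pod_ready.go implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReadyOrGone polls until the pod reports Ready, disappears,
    // or the context expires (hypothetical helper, for illustration only).
    func waitPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextCancel(ctx, 2*time.Second, true, func(ctx context.Context) (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if errors.IsNotFound(err) {
                return true, nil // pod is gone, which also counts as done
            }
            if err != nil {
                return false, nil // transient API error: keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil // not Ready yet, poll again
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Mirror the log's four-minute overall deadline.
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()

        if err := waitPodReadyOrGone(ctx, cs, "kube-system", "coredns-66bc5c9577-7lkd8"); err != nil {
            fmt.Println("wait failed:", err) // e.g. context deadline exceeded
        }
    }

If the pod stays unready for the whole window, the error surfaces exactly as in the log above, and the caller reports GUEST_START.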
	
	
	==> CRI-O <==
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.419320979Z" level=info msg="Started container" PID=1066 containerID=59632406be56295008167128b06b3d246e8cb935a790ce61ab27d7c9a0210c7a description=default/busybox-7b57f96db7-wts8f/busybox id=7b19c8e0-1b80-4d6a-a660-59d86bda3787 name=/runtime.v1.RuntimeService/StartContainer sandboxID=974bf02e23133aac017f3d339f396c28ca8b3d88a654f87bb690e5359126f72a
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.42219102Z" level=info msg="Created container b66756d6bf8454e51e71c9a010e9f000c2d6f65f4202832cc7a3a3bf546e9566: kube-system/kube-proxy-r5c77/kube-proxy" id=6c2d44d8-af9b-488e-a8fa-96cfda6ad07e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.422764701Z" level=info msg="Starting container: b66756d6bf8454e51e71c9a010e9f000c2d6f65f4202832cc7a3a3bf546e9566" id=f4e610f6-9234-460c-ab15-e7f9e1e22236 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.423187163Z" level=info msg="Created container c6e4a88e898128e18b3156f394f70fd2b7676c0a3014577d38064cdc4c08e233: default/busybox-7b57f96db7-dslrx/busybox" id=947f78d0-ea74-4827-abe4-b36a0b7703f5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.423803868Z" level=info msg="Starting container: c6e4a88e898128e18b3156f394f70fd2b7676c0a3014577d38064cdc4c08e233" id=f8c5be5c-7fca-4d32-8a6c-68008559df07 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.425692066Z" level=info msg="Started container" PID=1071 containerID=c6e4a88e898128e18b3156f394f70fd2b7676c0a3014577d38064cdc4c08e233 description=default/busybox-7b57f96db7-dslrx/busybox id=f8c5be5c-7fca-4d32-8a6c-68008559df07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fee9745be2801cab826368bca687acad119bd0bddcf3bddfe083e1bc37ec0a2e
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.425952275Z" level=info msg="Started container" PID=1065 containerID=b66756d6bf8454e51e71c9a010e9f000c2d6f65f4202832cc7a3a3bf546e9566 description=kube-system/kube-proxy-r5c77/kube-proxy id=f4e610f6-9234-460c-ab15-e7f9e1e22236 name=/runtime.v1.RuntimeService/StartContainer sandboxID=81d062f869179dcf8073b42df610726a49898283cc3b7b1c4382936f244009bc
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.828232315Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.832561313Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.832595738Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.832614781Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.836515238Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.836547213Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.836564322Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.840132316Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.840156246Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.840172174Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.844126033Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.844147287Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.8441679Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.847881335Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.84790256Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.847918681Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.851426018Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.851446887Z" level=info msg="Updated default CNI network name to kindnet"
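
The CRI-O entries above are its CNI directory watcher reacting to kindnet rewriting /etc/cni/net.d/10-kindnet.conflist: each CREATE/WRITE/RENAME event triggers a re-scan of the config directory and a refresh of the default network name. A rough sketch of that watch-and-reload pattern using the fsnotify package follows; it is not CRI-O's actual code, and reloadCNIConfig is a stand-in for its real config parsing.

    package main

    import (
        "log"
        "path/filepath"

        "github.com/fsnotify/fsnotify"
    )

    // reloadCNIConfig is a stand-in for re-reading *.conflist files and picking
    // the default network; CRI-O's real implementation is more involved.
    func reloadCNIConfig(dir string) {
        matches, _ := filepath.Glob(filepath.Join(dir, "*.conflist"))
        log.Printf("rescanning CNI configs: %v", matches)
    }

    func main() {
        const cniDir = "/etc/cni/net.d"

        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()

        if err := w.Add(cniDir); err != nil {
            log.Fatal(err)
        }

        for {
            select {
            case ev, ok := <-w.Events:
                if !ok {
                    return
                }
                // CREATE, WRITE and RENAME all mean the config set may have changed.
                if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
                    log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
                    reloadCNIConfig(cniDir)
                }
            case err, ok := <-w.Errors:
                if !ok {
                    return
                }
                log.Println("watch error:", err)
            }
        }
    }

kindnet writes the config to a .temp file and renames it into place, which is why the log shows a CREATE on the .temp file followed by WRITEs and a final RENAME.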
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	c6e4a88e89812       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   4 minutes ago       Running             busybox                   2                   fee9745be2801       busybox-7b57f96db7-dslrx            default
	59632406be562       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   4 minutes ago       Running             busybox                   2                   974bf02e23133       busybox-7b57f96db7-wts8f            default
	b66756d6bf845       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   4 minutes ago       Running             kube-proxy                0                   81d062f869179       kube-proxy-r5c77                    kube-system
	6e24622fde46e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 minutes ago       Running             kindnet-cni               0                   91e6c1a0bfdf0       kindnet-hzfvq                       kube-system
	86601d9f6ba07       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   4 minutes ago       Running             kube-controller-manager   0                   b67664be25ec4       kube-controller-manager-ha-907658   kube-system
	3102169518f14       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   4 minutes ago       Running             etcd                      0                   54905301bb684       etcd-ha-907658                      kube-system
	87abab3f9975c       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   4 minutes ago       Running             kube-apiserver            0                   56a831ff3eb23       kube-apiserver-ha-907658            kube-system
	db1d97b687400       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   4 minutes ago       Running             kube-scheduler            0                   cae40eeeedff8       kube-scheduler-ha-907658            kube-system
	04ab6dc0a72c2       6a2e30457bbed0ffdc161ff0131dfcfe9099692717f3d1bcae88b9db3d5a033c   4 minutes ago       Running             kube-vip                  0                   a3d8fbda9f509       kube-vip-ha-907658                  kube-system
	
	
	==> describe nodes <==
	Name:               ha-907658
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-907658
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=ha-907658
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_06_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:06:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-907658
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:16:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:16:07 +0000   Sun, 07 Dec 2025 23:06:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:16:07 +0000   Sun, 07 Dec 2025 23:06:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:16:07 +0000   Sun, 07 Dec 2025 23:06:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:16:07 +0000   Sun, 07 Dec 2025 23:07:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-907658
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                f44bac47-757c-4c31-8a75-ef9ebb40422e
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-dslrx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  default                     busybox-7b57f96db7-wts8f             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 etcd-ha-907658                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m34s
	  kube-system                 kindnet-hzfvq                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m29s
	  kube-system                 kube-apiserver-ha-907658             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 kube-controller-manager-ha-907658    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 kube-proxy-r5c77                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 kube-scheduler-ha-907658             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 kube-vip-ha-907658                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m11s                  kube-proxy       
	  Normal  Starting                 5m37s                  kube-proxy       
	  Normal  Starting                 9m27s                  kube-proxy       
	  Normal  Starting                 9m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     9m34s                  kubelet          Node ha-907658 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    9m34s                  kubelet          Node ha-907658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  9m34s                  kubelet          Node ha-907658 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           9m30s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           9m8s                   node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  NodeReady                8m47s                  kubelet          Node ha-907658 status is now: NodeReady
	  Normal  RegisteredNode           8m38s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           6m49s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  Starting                 5m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     5m52s (x8 over 5m53s)  kubelet          Node ha-907658 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m52s (x8 over 5m53s)  kubelet          Node ha-907658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m52s (x8 over 5m53s)  kubelet          Node ha-907658 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  Starting                 4m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m16s (x8 over 4m16s)  kubelet          Node ha-907658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s (x8 over 4m16s)  kubelet          Node ha-907658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s (x8 over 4m16s)  kubelet          Node ha-907658 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	
	
	Name:               ha-907658-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-907658-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=ha-907658
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_07T23_07_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:07:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-907658-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:16:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:16:08 +0000   Sun, 07 Dec 2025 23:07:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:16:08 +0000   Sun, 07 Dec 2025 23:07:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:16:08 +0000   Sun, 07 Dec 2025 23:07:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:16:08 +0000   Sun, 07 Dec 2025 23:12:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-907658-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                c4423b9c-a5a3-462a-aa6c-dc14a3add1e7
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-sd5gw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 coredns-66bc5c9577-7lkd8                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m28s
	  kube-system                 coredns-66bc5c9577-j9lqh                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m28s
	  kube-system                 etcd-ha-907658-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m9s
	  kube-system                 kindnet-wvnmz                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m10s
	  kube-system                 kube-apiserver-ha-907658-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-controller-manager-ha-907658-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-proxy-sdhd8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-scheduler-ha-907658-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-vip-ha-907658-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m8s                   kube-proxy       
	  Normal  Starting                 5m38s                  kube-proxy       
	  Normal  Starting                 9m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     9m13s (x8 over 9m13s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    9m13s (x8 over 9m13s)  kubelet          Node ha-907658-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  9m13s (x8 over 9m13s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           9m10s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           9m8s                   node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           8m38s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  NodeHasSufficientPID     6m55s (x8 over 6m55s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m55s (x8 over 6m55s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m55s (x8 over 6m55s)  kubelet          Node ha-907658-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m55s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m49s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  Starting                 5m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m51s (x8 over 5m51s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m51s (x8 over 5m51s)  kubelet          Node ha-907658-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m51s (x8 over 5m51s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m15s (x8 over 4m15s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s (x8 over 4m15s)  kubelet          Node ha-907658-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s (x8 over 4m15s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	
	
	Name:               ha-907658-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-907658-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=ha-907658
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_07T23_08_29_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:08:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-907658-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:16:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:15:55 +0000   Sun, 07 Dec 2025 23:08:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:15:55 +0000   Sun, 07 Dec 2025 23:08:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:15:55 +0000   Sun, 07 Dec 2025 23:08:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:15:55 +0000   Sun, 07 Dec 2025 23:08:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-907658-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                f80b86e6-d691-401f-8493-d6f45994affe
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9rqhs       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m46s
	  kube-system                 kube-proxy-b8vz9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m6s                   kube-proxy       
	  Normal  Starting                 7m43s                  kube-proxy       
	  Normal  Starting                 3m39s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m46s (x3 over 7m46s)  kubelet          Node ha-907658-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    7m46s (x3 over 7m46s)  kubelet          Node ha-907658-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m46s (x3 over 7m46s)  kubelet          Node ha-907658-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           7m45s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           7m43s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           7m43s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  NodeReady                7m33s                  kubelet          Node ha-907658-m04 status is now: NodeReady
	  Normal  RegisteredNode           6m49s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  Starting                 5m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m25s)  kubelet          Node ha-907658-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m25s)  kubelet          Node ha-907658-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x8 over 5m25s)  kubelet          Node ha-907658-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  Starting                 4m7s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m7s)    kubelet          Node ha-907658-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m7s)    kubelet          Node ha-907658-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x8 over 4m7s)    kubelet          Node ha-907658-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.006452] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494895] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006224] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494897] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.005623] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496066] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.005917] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[Dec 7 23:16] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.005986] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495337] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006100] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494663] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.005540] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496122] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.005022] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496083] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.004265] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.497368] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.004145] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496882] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.004333] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496983] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.004653] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496735] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.003847] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [3102169518f14fb026edc01e1247ff4c2edc1292fb8d6ddab3310dc29262b65d] <==
	{"level":"warn","ts":"2025-12-07T23:12:02.189592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.196728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.215628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.224754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.237034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.246470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.252727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.261173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.271732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.278843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.288369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.296456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.305017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.312949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.321771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.329387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.336384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.348809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.354004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.362664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.369994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.387625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.392081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.399402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.408031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45760","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:16:15 up  1:58,  0 user,  load average: 0.36, 1.17, 1.53
	Linux ha-907658 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6e24622fde46e804a62af01a0bc9c1984d71da811c0cb4227298bc171e53fbb1] <==
	I1207 23:15:33.828569       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
	I1207 23:15:43.828178       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1207 23:15:43.828259       1 main.go:324] Node ha-907658-m02 has CIDR [10.244.1.0/24] 
	I1207 23:15:43.828889       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1207 23:15:43.828920       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
	I1207 23:15:43.829151       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:15:43.829177       1 main.go:301] handling current node
	I1207 23:15:53.829598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:15:53.829646       1 main.go:301] handling current node
	I1207 23:15:53.829666       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1207 23:15:53.829673       1 main.go:324] Node ha-907658-m02 has CIDR [10.244.1.0/24] 
	I1207 23:15:53.829901       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1207 23:15:53.829912       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
	I1207 23:16:03.827509       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1207 23:16:03.827541       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
	I1207 23:16:03.827716       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:16:03.827727       1 main.go:301] handling current node
	I1207 23:16:03.827738       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1207 23:16:03.827742       1 main.go:324] Node ha-907658-m02 has CIDR [10.244.1.0/24] 
	I1207 23:16:13.832500       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:16:13.832542       1 main.go:301] handling current node
	I1207 23:16:13.832563       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1207 23:16:13.832569       1 main.go:324] Node ha-907658-m02 has CIDR [10.244.1.0/24] 
	I1207 23:16:13.832817       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1207 23:16:13.832843       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
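
kindnet's loop above walks every node, reads its pod CIDR, and for remote nodes ensures there is a route sending that CIDR via the node's InternalIP (the current node is only logged, not routed). A minimal sketch of that per-node route programming with the vishvananda/netlink package is shown below; it is illustrative only, and the real kindnetd also manages iptables rules and requires root.

    package main

    import (
        "log"
        "net"

        "github.com/vishvananda/netlink"
    )

    // ensureRouteToNode installs (or refreshes) a route for a remote node's pod
    // CIDR via that node's internal IP, mirroring the kindnet log lines above.
    func ensureRouteToNode(podCIDR, nodeIP string) error {
        _, dst, err := net.ParseCIDR(podCIDR)
        if err != nil {
            return err
        }
        route := &netlink.Route{
            Dst: dst,
            Gw:  net.ParseIP(nodeIP),
        }
        // RouteReplace adds the route or updates it if one already exists.
        return netlink.RouteReplace(route)
    }

    func main() {
        // Values taken from the log: ha-907658-m02 owns 10.244.1.0/24 at 192.168.49.3.
        if err := ensureRouteToNode("10.244.1.0/24", "192.168.49.3"); err != nil {
            log.Fatal(err)
        }
        log.Println("route ensured")
    }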
	
	
	==> kube-apiserver [87abab3f9975c7d1ffa51c90a94a832599db31aa8d9e2e4cdcccfa593c87020f] <==
	I1207 23:12:03.040289       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1207 23:12:03.040464       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 23:12:03.040505       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 23:12:03.040770       1 aggregator.go:171] initial CRD sync complete...
	I1207 23:12:03.040809       1 autoregister_controller.go:144] Starting autoregister controller
	I1207 23:12:03.040832       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:12:03.040883       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:12:03.041299       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1207 23:12:03.041943       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 23:12:03.042481       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1207 23:12:03.042740       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1207 23:12:03.049189       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:12:03.051184       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1207 23:12:03.058680       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1207 23:12:03.058715       1 policy_source.go:240] refreshing policies
	E1207 23:12:03.062917       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:12:03.092652       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:12:03.204088       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:12:03.945462       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1207 23:12:04.372374       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 23:12:04.373818       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:12:04.380398       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:12:06.632914       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:12:06.742193       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:12:06.884554       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [86601d9f6ba07c5cc957fcd84ee14c9ed14e0f86e2c332659c8fd9ca9c473cdd] <==
	I1207 23:12:06.403290       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1207 23:12:26.377390       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:26.377430       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:26.377438       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:26.377446       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:26.377453       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377569       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377609       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377617       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377626       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377632       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	I1207 23:12:46.388648       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5lg58"
	I1207 23:12:46.410719       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5lg58"
	I1207 23:12:46.411071       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-907658-m03"
	I1207 23:12:46.433046       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-907658-m03"
	I1207 23:12:46.433163       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-907658-m03"
	I1207 23:12:46.454493       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-907658-m03"
	I1207 23:12:46.454614       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8fwsf"
	I1207 23:12:46.480073       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8fwsf"
	I1207 23:12:46.480362       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-907658-m03"
	I1207 23:12:46.506233       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-907658-m03"
	I1207 23:12:46.506270       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-907658-m03"
	I1207 23:12:46.539150       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-907658-m03"
	I1207 23:12:46.539211       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-907658-m03"
	I1207 23:12:46.557024       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-907658-m03"
	
	
	==> kube-proxy [b66756d6bf8454e51e71c9a010e9f000c2d6f65f4202832cc7a3a3bf546e9566] <==
	I1207 23:12:03.463144       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:12:03.526682       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 23:12:03.627174       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 23:12:03.627210       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 23:12:03.627301       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:12:03.644894       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:12:03.644940       1 server_linux.go:132] "Using iptables Proxier"
	I1207 23:12:03.650181       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:12:03.650669       1 server.go:527] "Version info" version="v1.34.2"
	I1207 23:12:03.650718       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:12:03.653161       1 config.go:200] "Starting service config controller"
	I1207 23:12:03.653188       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:12:03.653219       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:12:03.653225       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:12:03.653244       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:12:03.653256       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:12:03.653346       1 config.go:309] "Starting node config controller"
	I1207 23:12:03.653353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:12:03.653366       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:12:03.753518       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:12:03.753552       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:12:03.753868       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [db1d97b6874004dcfa1bfc301e8470ac6e8ab810f5002178c4d64e0899af2340] <==
	I1207 23:11:59.847303       1 serving.go:386] Generated self-signed cert in-memory
	I1207 23:12:03.025213       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1207 23:12:03.025271       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:12:03.035813       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1207 23:12:03.035844       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:12:03.035857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 23:12:03.035870       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:12:03.035870       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1207 23:12:03.035879       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 23:12:03.036226       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:12:03.036552       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:12:03.136624       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 23:12:03.136650       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:12:03.136707       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 07 23:12:00 ha-907658 kubelet[746]: E1207 23:12:00.081635     746 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-907658\" not found" node="ha-907658"
	Dec 07 23:12:01 ha-907658 kubelet[746]: E1207 23:12:01.083780     746 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-907658\" not found" node="ha-907658"
	Dec 07 23:12:01 ha-907658 kubelet[746]: E1207 23:12:01.083932     746 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-907658\" not found" node="ha-907658"
	Dec 07 23:12:01 ha-907658 kubelet[746]: E1207 23:12:01.084030     746 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-907658\" not found" node="ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.042925     746 apiserver.go:52] "Watching apiserver"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.045963     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.069383     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-ha-907658\" already exists" pod="kube-system/etcd-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.069626     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.087189     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-907658\" already exists" pod="kube-system/kube-apiserver-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.091705     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.100510     746 kubelet_node_status.go:124] "Node was previously registered" node="ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.100646     746 kubelet_node_status.go:78] "Successfully registered node" node="ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.100685     746 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.101661     746 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.104485     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-907658\" already exists" pod="kube-system/kube-controller-manager-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.104628     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.115174     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-907658\" already exists" pod="kube-system/kube-scheduler-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.115385     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.125044     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-vip-ha-907658\" already exists" pod="kube-system/kube-vip-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.146852     746 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.199347     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9-xtables-lock\") pod \"kube-proxy-r5c77\" (UID: \"c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9\") " pod="kube-system/kube-proxy-r5c77"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.199404     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c0ef1d7-39de-46ce-b16b-4d2794e7dc20-lib-modules\") pod \"kindnet-hzfvq\" (UID: \"8c0ef1d7-39de-46ce-b16b-4d2794e7dc20\") " pod="kube-system/kindnet-hzfvq"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.200064     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8c0ef1d7-39de-46ce-b16b-4d2794e7dc20-cni-cfg\") pod \"kindnet-hzfvq\" (UID: \"8c0ef1d7-39de-46ce-b16b-4d2794e7dc20\") " pod="kube-system/kindnet-hzfvq"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.200129     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c0ef1d7-39de-46ce-b16b-4d2794e7dc20-xtables-lock\") pod \"kindnet-hzfvq\" (UID: \"8c0ef1d7-39de-46ce-b16b-4d2794e7dc20\") " pod="kube-system/kindnet-hzfvq"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.200193     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9-lib-modules\") pod \"kube-proxy-r5c77\" (UID: \"c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9\") " pod="kube-system/kube-proxy-r5c77"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-907658 -n ha-907658
helpers_test.go:269: (dbg) Run:  kubectl --context ha-907658 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (263.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-907658" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-907658\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-907658\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-907658\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"reg
istry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticI
P\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-907658
helpers_test.go:243: (dbg) docker inspect ha-907658:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf",
	        "Created": "2025-12-07T23:06:25.641182516Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 487285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:11:52.946976582Z",
	            "FinishedAt": "2025-12-07T23:11:52.180976562Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf/hostname",
	        "HostsPath": "/var/lib/docker/containers/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf/hosts",
	        "LogPath": "/var/lib/docker/containers/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf-json.log",
	        "Name": "/ha-907658",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-907658:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-907658",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf",
	                "LowerDir": "/var/lib/docker/overlay2/95f4d37acd9603eb9082e08eb2b25d1d911e5a215fb4e71b00c8c77b90dafbc3-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/95f4d37acd9603eb9082e08eb2b25d1d911e5a215fb4e71b00c8c77b90dafbc3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/95f4d37acd9603eb9082e08eb2b25d1d911e5a215fb4e71b00c8c77b90dafbc3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/95f4d37acd9603eb9082e08eb2b25d1d911e5a215fb4e71b00c8c77b90dafbc3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-907658",
	                "Source": "/var/lib/docker/volumes/ha-907658/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-907658",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-907658",
	                "name.minikube.sigs.k8s.io": "ha-907658",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ba5e035333284e7ec191aa45f8e8f710a1211614ee9390e57a685e532fd2b7d0",
	            "SandboxKey": "/var/run/docker/netns/ba5e03533328",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33213"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33214"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33217"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33215"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33216"
	                    }
	                ]
	            },
	            "Networks": {
	                "ha-907658": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "918c8f4f6e86f6f20607e87a6beb39a8a1d64cc9183e3317d1968551e79c40e2",
	                    "EndpointID": "39156e34f46c5c2dd2e2dd90a72a9e93d4aca46c4dae46d6dd8bcd5fd820e723",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d2:5b:58:4b:cd:fa",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-907658",
	                        "b18b557fea95"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-907658 -n ha-907658
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 logs -n 25: (1.005784247s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-907658 cp ha-907658-m03:/home/docker/cp-test.txt ha-907658-m04:/home/docker/cp-test_ha-907658-m03_ha-907658-m04.txt               │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test_ha-907658-m03_ha-907658-m04.txt                                         │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ cp      │ ha-907658 cp testdata/cp-test.txt ha-907658-m04:/home/docker/cp-test.txt                                                             │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ cp      │ ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786965912/001/cp-test_ha-907658-m04.txt │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ cp      │ ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt ha-907658:/home/docker/cp-test_ha-907658-m04_ha-907658.txt                       │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658 sudo cat /home/docker/cp-test_ha-907658-m04_ha-907658.txt                                                 │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ cp      │ ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt ha-907658-m02:/home/docker/cp-test_ha-907658-m04_ha-907658-m02.txt               │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m02 sudo cat /home/docker/cp-test_ha-907658-m04_ha-907658-m02.txt                                         │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ cp      │ ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt ha-907658-m03:/home/docker/cp-test_ha-907658-m04_ha-907658-m03.txt               │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m03 sudo cat /home/docker/cp-test_ha-907658-m04_ha-907658-m03.txt                                         │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ node    │ ha-907658 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ node    │ ha-907658 node start m02 --alsologtostderr -v 5                                                                                      │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ node    │ ha-907658 node list --alsologtostderr -v 5                                                                                           │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │                     │
	│ stop    │ ha-907658 stop --alsologtostderr -v 5                                                                                                │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:10 UTC │
	│ start   │ ha-907658 start --wait true --alsologtostderr -v 5                                                                                   │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:11 UTC │
	│ node    │ ha-907658 node list --alsologtostderr -v 5                                                                                           │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:11 UTC │                     │
	│ node    │ ha-907658 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:11 UTC │ 07 Dec 25 23:11 UTC │
	│ stop    │ ha-907658 stop --alsologtostderr -v 5                                                                                                │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:11 UTC │ 07 Dec 25 23:11 UTC │
	│ start   │ ha-907658 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:11:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:11:52.723208  487084 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:11:52.723342  487084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:11:52.723354  487084 out.go:374] Setting ErrFile to fd 2...
	I1207 23:11:52.723361  487084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:11:52.723559  487084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:11:52.724064  487084 out.go:368] Setting JSON to false
	I1207 23:11:52.725035  487084 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6857,"bootTime":1765142256,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:11:52.725102  487084 start.go:143] virtualization: kvm guest
	I1207 23:11:52.726965  487084 out.go:179] * [ha-907658] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:11:52.728170  487084 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:11:52.728167  487084 notify.go:221] Checking for updates...
	I1207 23:11:52.730209  487084 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:11:52.731286  487084 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:11:52.732435  487084 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:11:52.733509  487084 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:11:52.734621  487084 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:11:52.736265  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:52.736931  487084 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:11:52.761948  487084 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:11:52.762088  487084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:11:52.815796  487084 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-07 23:11:52.805859782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:11:52.815895  487084 docker.go:319] overlay module found
	I1207 23:11:52.818644  487084 out.go:179] * Using the docker driver based on existing profile
	I1207 23:11:52.819812  487084 start.go:309] selected driver: docker
	I1207 23:11:52.819828  487084 start.go:927] validating driver "docker" against &{Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:11:52.819961  487084 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:11:52.820059  487084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:11:52.873900  487084 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-07 23:11:52.864641727 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:11:52.874579  487084 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:11:52.874614  487084 cni.go:84] Creating CNI manager for ""
	I1207 23:11:52.874670  487084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1207 23:11:52.874722  487084 start.go:353] cluster config:
	{Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:11:52.876967  487084 out.go:179] * Starting "ha-907658" primary control-plane node in "ha-907658" cluster
	I1207 23:11:52.877923  487084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:11:52.878975  487084 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:11:52.880201  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:11:52.880231  487084 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 23:11:52.880239  487084 cache.go:65] Caching tarball of preloaded images
	I1207 23:11:52.880300  487084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:11:52.880362  487084 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:11:52.880377  487084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:11:52.880537  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:52.900771  487084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:11:52.900792  487084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:11:52.900810  487084 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:11:52.900849  487084 start.go:360] acquireMachinesLock for ha-907658: {Name:mkd7016770bc40ef9cd544023d232b92bc7cf832 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:11:52.900927  487084 start.go:364] duration metric: took 42.672µs to acquireMachinesLock for "ha-907658"
	I1207 23:11:52.900952  487084 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:11:52.900961  487084 fix.go:54] fixHost starting: 
	I1207 23:11:52.901168  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:11:52.918459  487084 fix.go:112] recreateIfNeeded on ha-907658: state=Stopped err=<nil>
	W1207 23:11:52.918485  487084 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:11:52.920300  487084 out.go:252] * Restarting existing docker container for "ha-907658" ...
	I1207 23:11:52.920381  487084 cli_runner.go:164] Run: docker start ha-907658
	I1207 23:11:53.154762  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:11:53.172884  487084 kic.go:430] container "ha-907658" state is running.
	I1207 23:11:53.173368  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:11:53.192850  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:53.193082  487084 machine.go:94] provisionDockerMachine start ...
	I1207 23:11:53.193169  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:53.211683  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:53.211988  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:53.212008  487084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:11:53.212567  487084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40796->127.0.0.1:33213: read: connection reset by peer
	I1207 23:11:56.342986  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658
	
	I1207 23:11:56.343016  487084 ubuntu.go:182] provisioning hostname "ha-907658"
	I1207 23:11:56.343087  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:56.361678  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:56.361914  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:56.361928  487084 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-907658 && echo "ha-907658" | sudo tee /etc/hostname
	I1207 23:11:56.498208  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658
	
	I1207 23:11:56.498287  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:56.517144  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:56.517409  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:56.517428  487084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-907658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-907658/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-907658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:11:56.645103  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:11:56.645138  487084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:11:56.645173  487084 ubuntu.go:190] setting up certificates
	I1207 23:11:56.645187  487084 provision.go:84] configureAuth start
	I1207 23:11:56.645254  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:11:56.663482  487084 provision.go:143] copyHostCerts
	I1207 23:11:56.663535  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:11:56.663565  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:11:56.663574  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:11:56.663652  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:11:56.663767  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:11:56.663794  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:11:56.663802  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:11:56.663845  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:11:56.663928  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:11:56.663951  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:11:56.663961  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:11:56.663999  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:11:56.664154  487084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.ha-907658 san=[127.0.0.1 192.168.49.2 ha-907658 localhost minikube]
	I1207 23:11:56.859476  487084 provision.go:177] copyRemoteCerts
	I1207 23:11:56.859539  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:11:56.859583  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:56.877854  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:56.971727  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 23:11:56.971784  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1207 23:11:56.989675  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 23:11:56.989726  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:11:57.006645  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 23:11:57.006699  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:11:57.024214  487084 provision.go:87] duration metric: took 379.007514ms to configureAuth
	I1207 23:11:57.024242  487084 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:11:57.024505  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:57.024648  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.043106  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:57.043322  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:57.043362  487084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:11:57.351275  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:11:57.351301  487084 machine.go:97] duration metric: took 4.158205159s to provisionDockerMachine
	I1207 23:11:57.351316  487084 start.go:293] postStartSetup for "ha-907658" (driver="docker")
	I1207 23:11:57.351345  487084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:11:57.351414  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:11:57.351463  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.370902  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.463959  487084 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:11:57.467550  487084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:11:57.467577  487084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:11:57.467590  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:11:57.467657  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:11:57.467762  487084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:11:57.467778  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /etc/ssl/certs/3931252.pem
	I1207 23:11:57.467888  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:11:57.475351  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:11:57.492383  487084 start.go:296] duration metric: took 141.051455ms for postStartSetup
	I1207 23:11:57.492490  487084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:11:57.492538  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.510719  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.601727  487084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:11:57.606180  487084 fix.go:56] duration metric: took 4.705212142s for fixHost
	I1207 23:11:57.606209  487084 start.go:83] releasing machines lock for "ha-907658", held for 4.705267868s
	I1207 23:11:57.606320  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:11:57.624104  487084 ssh_runner.go:195] Run: cat /version.json
	I1207 23:11:57.624182  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.624209  487084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:11:57.624294  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.642922  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.643662  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.785793  487084 ssh_runner.go:195] Run: systemctl --version
	I1207 23:11:57.792308  487084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:11:57.826743  487084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:11:57.831572  487084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:11:57.831644  487084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:11:57.839631  487084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:11:57.839653  487084 start.go:496] detecting cgroup driver to use...
	I1207 23:11:57.839690  487084 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:11:57.839733  487084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:11:57.853650  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:11:57.866122  487084 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:11:57.866194  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:11:57.880612  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:11:57.893020  487084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:11:57.971718  487084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:11:58.051170  487084 docker.go:234] disabling docker service ...
	I1207 23:11:58.051240  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:11:58.065815  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:11:58.078071  487084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:11:58.159158  487084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:11:58.241617  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:11:58.253808  487084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:11:58.267810  487084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:11:58.267865  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.276619  487084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:11:58.276694  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.285159  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.293362  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.301983  487084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:11:58.310270  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.319027  487084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.327563  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.336683  487084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:11:58.344663  487084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:11:58.352591  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:11:58.430723  487084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:11:58.561670  487084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:11:58.561748  487084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:11:58.565839  487084 start.go:564] Will wait 60s for crictl version
	I1207 23:11:58.565925  487084 ssh_runner.go:195] Run: which crictl
	I1207 23:11:58.569353  487084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:11:58.593853  487084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:11:58.593949  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:11:58.621201  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:11:58.650380  487084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:11:58.651543  487084 cli_runner.go:164] Run: docker network inspect ha-907658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:11:58.669539  487084 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:11:58.673718  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:11:58.684392  487084 kubeadm.go:884] updating cluster {Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:11:58.684550  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:11:58.684610  487084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:11:58.716893  487084 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:11:58.716915  487084 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:11:58.717012  487084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:11:58.743428  487084 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:11:58.743474  487084 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:11:58.743483  487084 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1207 23:11:58.743593  487084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-907658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:11:58.743655  487084 ssh_runner.go:195] Run: crio config
	I1207 23:11:58.789302  487084 cni.go:84] Creating CNI manager for ""
	I1207 23:11:58.789345  487084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1207 23:11:58.789368  487084 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:11:58.789396  487084 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-907658 NodeName:ha-907658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:11:58.789521  487084 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-907658"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:11:58.789548  487084 kube-vip.go:115] generating kube-vip config ...
	I1207 23:11:58.789589  487084 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1207 23:11:58.801884  487084 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:11:58.802014  487084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1207 23:11:58.802092  487084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:11:58.809827  487084 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:11:58.809897  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1207 23:11:58.817290  487084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1207 23:11:58.829895  487084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:11:58.842148  487084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1207 23:11:58.854128  487084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1207 23:11:58.866494  487084 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1207 23:11:58.870208  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:11:58.879832  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:11:58.957062  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:11:58.981696  487084 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658 for IP: 192.168.49.2
	I1207 23:11:58.981720  487084 certs.go:195] generating shared ca certs ...
	I1207 23:11:58.981747  487084 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:58.981923  487084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:11:58.981976  487084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:11:58.981990  487084 certs.go:257] generating profile certs ...
	I1207 23:11:58.982095  487084 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key
	I1207 23:11:58.982127  487084 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7
	I1207 23:11:58.982147  487084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1207 23:11:59.053446  487084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7 ...
	I1207 23:11:59.053484  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7: {Name:mkde9a77ed2ccf374bbd7ef2ab8471222e930ca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.053683  487084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7 ...
	I1207 23:11:59.053700  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7: {Name:mkf9f5e1f2966de715814128c39c83c05472c22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.053837  487084 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt
	I1207 23:11:59.054023  487084 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key
	I1207 23:11:59.054208  487084 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key
	I1207 23:11:59.054223  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 23:11:59.054240  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 23:11:59.054254  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 23:11:59.054268  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 23:11:59.054285  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 23:11:59.054298  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 23:11:59.054315  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 23:11:59.054346  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 23:11:59.054449  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:11:59.054492  487084 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:11:59.054503  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:11:59.054539  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:11:59.054597  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:11:59.054627  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:11:59.054683  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:11:59.054723  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem -> /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.054754  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.054767  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.055522  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:11:59.076096  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:11:59.092913  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:11:59.110126  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:11:59.126855  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1207 23:11:59.143407  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:11:59.160896  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:11:59.178517  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:11:59.196273  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:11:59.213156  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:11:59.230319  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:11:59.247989  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:11:59.259981  487084 ssh_runner.go:195] Run: openssl version
	I1207 23:11:59.265807  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.273185  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:11:59.280496  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.284023  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.284068  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.318047  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:11:59.325928  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.332951  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:11:59.340016  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.343716  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.343772  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.377866  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:11:59.386064  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.393852  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:11:59.401598  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.405548  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.405622  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.439621  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:11:59.447485  487084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:11:59.451341  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:11:59.493084  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:11:59.535906  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:11:59.583567  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:11:59.642172  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:11:59.681845  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 23:11:59.717892  487084 kubeadm.go:401] StartCluster: {Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:11:59.718040  487084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:11:59.718122  487084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:11:59.750509  487084 cri.go:89] found id: "86601d9f6ba07c5cc957fcd84ee14c9ed14e0f86e2c332659c8fd9ca9c473cdd"
	I1207 23:11:59.750537  487084 cri.go:89] found id: "3102169518f14fb026edc01e1247ff4c2edc1292fb8d6ddab3310dc29262b65d"
	I1207 23:11:59.750543  487084 cri.go:89] found id: "87abab3f9975c7d1ffa51c90a94a832599db31aa8d9e2e4cdcccfa593c87020f"
	I1207 23:11:59.750548  487084 cri.go:89] found id: "db1d97b6874004dcfa1bfc301e8470ac6e8ab810f5002178c4d64e0899af2340"
	I1207 23:11:59.750560  487084 cri.go:89] found id: "04ab6dc0a72c2fd9ce998abf808c8139e9d16737d96e3dc5573726403cfba770"
	I1207 23:11:59.750567  487084 cri.go:89] found id: ""
	I1207 23:11:59.750620  487084 ssh_runner.go:195] Run: sudo runc list -f json
	W1207 23:11:59.763116  487084 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:11:59Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:11:59.763191  487084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:11:59.771453  487084 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:11:59.771471  487084 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:11:59.771524  487084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:11:59.778977  487084 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:11:59.779462  487084 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-907658" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:11:59.779590  487084 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-389542/kubeconfig needs updating (will repair): [kubeconfig missing "ha-907658" cluster setting kubeconfig missing "ha-907658" context setting]
	I1207 23:11:59.780044  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.780730  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 23:11:59.781268  487084 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1207 23:11:59.781286  487084 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1207 23:11:59.781293  487084 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1207 23:11:59.781300  487084 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1207 23:11:59.781318  487084 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1207 23:11:59.781314  487084 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1207 23:11:59.781841  487084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:11:59.790236  487084 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1207 23:11:59.790262  487084 kubeadm.go:602] duration metric: took 18.784379ms to restartPrimaryControlPlane
	I1207 23:11:59.790272  487084 kubeadm.go:403] duration metric: took 72.393488ms to StartCluster
	I1207 23:11:59.790292  487084 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.790408  487084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:11:59.791175  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.791433  487084 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:11:59.791463  487084 start.go:242] waiting for startup goroutines ...
	I1207 23:11:59.791480  487084 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:11:59.791743  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:59.794127  487084 out.go:179] * Enabled addons: 
	I1207 23:11:59.795136  487084 addons.go:530] duration metric: took 3.661252ms for enable addons: enabled=[]
	I1207 23:11:59.795167  487084 start.go:247] waiting for cluster config update ...
	I1207 23:11:59.795178  487084 start.go:256] writing updated cluster config ...
	I1207 23:11:59.796468  487084 out.go:203] 
	I1207 23:11:59.797620  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:59.797739  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:59.799011  487084 out.go:179] * Starting "ha-907658-m02" control-plane node in "ha-907658" cluster
	I1207 23:11:59.799852  487084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:11:59.800858  487084 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:11:59.801718  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:11:59.801733  487084 cache.go:65] Caching tarball of preloaded images
	I1207 23:11:59.801784  487084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:11:59.801821  487084 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:11:59.801834  487084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:11:59.801944  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:59.823527  487084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:11:59.823550  487084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:11:59.823570  487084 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:11:59.823603  487084 start.go:360] acquireMachinesLock for ha-907658-m02: {Name:mk6484dd4dfe7ba137d5f583543a1831d27edba5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:11:59.823673  487084 start.go:364] duration metric: took 49.067µs to acquireMachinesLock for "ha-907658-m02"
	I1207 23:11:59.823696  487084 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:11:59.823702  487084 fix.go:54] fixHost starting: m02
	I1207 23:11:59.823927  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m02 --format={{.State.Status}}
	I1207 23:11:59.844560  487084 fix.go:112] recreateIfNeeded on ha-907658-m02: state=Stopped err=<nil>
	W1207 23:11:59.844589  487084 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:11:59.846377  487084 out.go:252] * Restarting existing docker container for "ha-907658-m02" ...
	I1207 23:11:59.846453  487084 cli_runner.go:164] Run: docker start ha-907658-m02
	I1207 23:12:00.130224  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m02 --format={{.State.Status}}
	I1207 23:12:00.155491  487084 kic.go:430] container "ha-907658-m02" state is running.
	I1207 23:12:00.155911  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m02
	I1207 23:12:00.178281  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:00.178573  487084 machine.go:94] provisionDockerMachine start ...
	I1207 23:12:00.178649  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:00.198614  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:00.198945  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:00.198960  487084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:12:00.199661  487084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38884->127.0.0.1:33218: read: connection reset by peer
	I1207 23:12:03.333342  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m02
	
	I1207 23:12:03.333382  487084 ubuntu.go:182] provisioning hostname "ha-907658-m02"
	I1207 23:12:03.333446  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:03.352148  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:03.352463  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:03.352484  487084 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-907658-m02 && echo "ha-907658-m02" | sudo tee /etc/hostname
	I1207 23:12:03.505996  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m02
	
	I1207 23:12:03.506086  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:03.523096  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:03.523409  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:03.523430  487084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-907658-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-907658-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-907658-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:12:03.654538  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:12:03.654571  487084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:12:03.654593  487084 ubuntu.go:190] setting up certificates
	I1207 23:12:03.654607  487084 provision.go:84] configureAuth start
	I1207 23:12:03.654667  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m02
	I1207 23:12:03.678200  487084 provision.go:143] copyHostCerts
	I1207 23:12:03.678248  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:03.678285  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:12:03.678297  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:03.678397  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:12:03.678500  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:03.678535  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:12:03.678546  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:03.678587  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:12:03.678657  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:03.678682  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:12:03.678690  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:03.678715  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:12:03.678770  487084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.ha-907658-m02 san=[127.0.0.1 192.168.49.3 ha-907658-m02 localhost minikube]
	I1207 23:12:03.790264  487084 provision.go:177] copyRemoteCerts
	I1207 23:12:03.790352  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:12:03.790402  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:03.823101  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:03.924465  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 23:12:03.924539  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:12:03.944485  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 23:12:03.944556  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 23:12:03.968961  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 23:12:03.969036  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:12:03.995367  487084 provision.go:87] duration metric: took 340.743667ms to configureAuth
	I1207 23:12:03.995400  487084 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:12:03.995657  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:03.995779  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.026533  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:04.026857  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:04.026885  487084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:12:04.415911  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:12:04.415941  487084 machine.go:97] duration metric: took 4.237351611s to provisionDockerMachine
	I1207 23:12:04.415957  487084 start.go:293] postStartSetup for "ha-907658-m02" (driver="docker")
	I1207 23:12:04.415971  487084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:12:04.416028  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:12:04.416078  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.434685  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.530207  487084 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:12:04.533967  487084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:12:04.533999  487084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:12:04.534014  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:12:04.534066  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:12:04.534139  487084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:12:04.534149  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /etc/ssl/certs/3931252.pem
	I1207 23:12:04.534230  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:12:04.542117  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:04.560472  487084 start.go:296] duration metric: took 144.495639ms for postStartSetup
	I1207 23:12:04.560570  487084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:12:04.560625  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.577649  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.669363  487084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:12:04.674346  487084 fix.go:56] duration metric: took 4.85062394s for fixHost
	I1207 23:12:04.674372  487084 start.go:83] releasing machines lock for "ha-907658-m02", held for 4.850686194s
	I1207 23:12:04.674436  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m02
	I1207 23:12:04.693901  487084 out.go:179] * Found network options:
	I1207 23:12:04.695122  487084 out.go:179]   - NO_PROXY=192.168.49.2
	W1207 23:12:04.696299  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:04.696348  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	I1207 23:12:04.696432  487084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:12:04.696482  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.696491  487084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:12:04.696545  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.715832  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.716229  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.880414  487084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:12:04.885363  487084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:12:04.885437  487084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:12:04.893312  487084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:12:04.893347  487084 start.go:496] detecting cgroup driver to use...
	I1207 23:12:04.893386  487084 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:12:04.893433  487084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:12:04.908112  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:12:04.920708  487084 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:12:04.920806  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:12:04.935538  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:12:04.948970  487084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:12:05.093803  487084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:12:05.237498  487084 docker.go:234] disabling docker service ...
	I1207 23:12:05.237578  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:12:05.255362  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:12:05.271477  487084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:12:05.401811  487084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:12:05.532521  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:12:05.547785  487084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:12:05.566033  487084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:12:05.566094  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.577067  487084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:12:05.577126  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.589050  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.599566  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.609984  487084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:12:05.619430  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.632001  487084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.642199  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.652617  487084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:12:05.661297  487084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:12:05.671605  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:05.817088  487084 ssh_runner.go:195] Run: sudo systemctl restart crio
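
The sed calls above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: pin the pause image to registry.k8s.io/pause:3.10.1, set cgroup_manager to "systemd", re-add conmon_cgroup as "pod", and ensure default_sysctls opens unprivileged ports. Purely as an assumption about the resulting file (the log only shows the edits, not the surrounding sections), the drop-in would end up roughly like the TOML held in this Go constant:

    package main

    import "fmt"

    // expectedCrioDropIn is an assumed reconstruction of the keys touched by
    // the sed commands in the log above; the real file may contain more.
    const expectedCrioDropIn = `[crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `

    func main() {
        fmt.Print(expectedCrioDropIn)
    }
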
	I1207 23:12:06.027922  487084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:12:06.027991  487084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:12:06.032083  487084 start.go:564] Will wait 60s for crictl version
	I1207 23:12:06.032144  487084 ssh_runner.go:195] Run: which crictl
	I1207 23:12:06.035913  487084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:12:06.060174  487084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:12:06.060268  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:06.088918  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:06.119010  487084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:12:06.120321  487084 out.go:179]   - env NO_PROXY=192.168.49.2
	I1207 23:12:06.121801  487084 cli_runner.go:164] Run: docker network inspect ha-907658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:12:06.139719  487084 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:12:06.143993  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
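
The host.minikube.internal entry is refreshed with an idempotent pattern: grep checks for the exact line first, and if it is missing or stale, /etc/hosts is rebuilt by filtering the old entry out, appending the new one to a temp file, and copying that file back into place. A minimal sketch of the same pattern against a local file; the path in main is a stand-in, not the node's /etc/hosts.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites path so it contains exactly one "ip<TAB>host"
    // line, mirroring the grep -v / echo / cp sequence in the log above.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        // Write a temp file and move it into place, the same replace-then-copy
        // idea as the /tmp/h.$$ + sudo cp dance above.
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        if err := ensureHostsEntry("hosts.test", "192.168.49.1", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
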
	I1207 23:12:06.155217  487084 mustload.go:66] Loading cluster: ha-907658
	I1207 23:12:06.155433  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:06.155653  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:12:06.173920  487084 host.go:66] Checking if "ha-907658" exists ...
	I1207 23:12:06.174154  487084 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658 for IP: 192.168.49.3
	I1207 23:12:06.174165  487084 certs.go:195] generating shared ca certs ...
	I1207 23:12:06.174179  487084 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:12:06.174311  487084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:12:06.174381  487084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:12:06.174397  487084 certs.go:257] generating profile certs ...
	I1207 23:12:06.174493  487084 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key
	I1207 23:12:06.174583  487084 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.39a0badd
	I1207 23:12:06.174639  487084 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key
	I1207 23:12:06.174654  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 23:12:06.174671  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 23:12:06.174693  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 23:12:06.174708  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 23:12:06.174722  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 23:12:06.174739  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 23:12:06.174753  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 23:12:06.174772  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 23:12:06.174836  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:12:06.174877  487084 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:12:06.174891  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:12:06.174926  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:12:06.174963  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:12:06.174996  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:12:06.175052  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:06.175095  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.175115  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.175131  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem -> /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.175194  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:12:06.197420  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:12:06.283673  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1207 23:12:06.290449  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1207 23:12:06.302775  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1207 23:12:06.308469  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1207 23:12:06.317835  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1207 23:12:06.321609  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1207 23:12:06.330066  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1207 23:12:06.333816  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1207 23:12:06.345628  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1207 23:12:06.352380  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1207 23:12:06.360869  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1207 23:12:06.364787  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1207 23:12:06.374104  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:12:06.394705  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:12:06.413194  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:12:06.432115  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:12:06.449406  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1207 23:12:06.466917  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:12:06.498654  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:12:06.528737  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:12:06.546449  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:12:06.564005  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:12:06.582815  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:12:06.601666  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1207 23:12:06.615105  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1207 23:12:06.631379  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1207 23:12:06.646798  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1207 23:12:06.659864  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1207 23:12:06.675256  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1207 23:12:06.690795  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1207 23:12:06.705444  487084 ssh_runner.go:195] Run: openssl version
	I1207 23:12:06.712063  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.720029  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:12:06.728834  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.733304  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.733391  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.771128  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:12:06.779038  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.787058  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:12:06.794858  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.798600  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.798662  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.834714  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:12:06.842519  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.849816  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:12:06.857109  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.860827  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.860876  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.901264  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
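
Each CA bundle above is installed by copying it under /usr/share/ca-certificates, hashing it with openssl x509 -hash -noout, and symlinking /etc/ssl/certs/<subject-hash>.0 to it so OpenSSL can resolve it by hash (e.g. b5213941.0 for minikubeCA.pem). A minimal sketch of that hash-and-link step; the paths in main are illustrative and the program would need write access to the target directory.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert creates the <subject-hash>.0 symlink OpenSSL uses for
    // lookup, mirroring the openssl x509 -hash / ln -fs steps above.
    func installCACert(certPath, certDir string) error {
        // openssl x509 -hash -noout prints the subject hash (e.g. "b5213941").
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certDir, hash+".0")
        _ = os.Remove(link) // replace an existing link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
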
	I1207 23:12:06.909596  487084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:12:06.913535  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:12:06.953706  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:12:06.990023  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:12:07.024365  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:12:07.059478  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:12:07.093656  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
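
Before the existing control-plane certificates are reused, each one is run through openssl x509 -checkend 86400, which fails if the certificate expires within the next 24 hours. A sketch of the same check done in-process with crypto/x509; the certificate path in main is taken from the log, everything else is an assumption.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // the question `openssl x509 -checkend 86400` answers in the log above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
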
	I1207 23:12:07.130433  487084 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1207 23:12:07.130566  487084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-907658-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:12:07.130596  487084 kube-vip.go:115] generating kube-vip config ...
	I1207 23:12:07.130647  487084 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1207 23:12:07.142960  487084 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:12:07.143037  487084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
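
The kube-vip manifest above is only written after probing for IPVS support with lsmod | grep ip_vs; because the probe failed here, control-plane load-balancing is skipped and the VIP 192.168.49.254 is announced via ARP only. A small sketch of the same probe, assuming plain lsmod output parsing; the helper name is made up.

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
    )

    // ipvsAvailable reports whether any ip_vs* module shows up in lsmod,
    // mirroring the `lsmod | grep ip_vs` probe in the log above.
    func ipvsAvailable() (bool, error) {
        out, err := exec.Command("lsmod").Output()
        if err != nil {
            return false, err
        }
        sc := bufio.NewScanner(strings.NewReader(string(out)))
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), "ip_vs") {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := ipvsAvailable()
        switch {
        case err != nil:
            fmt.Println("could not probe ip_vs:", err)
        case ok:
            fmt.Println("ip_vs present: control-plane load-balancing could be enabled")
        default:
            fmt.Println("ip_vs absent: fall back to ARP-only VIP, as in the log")
        }
    }
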
	I1207 23:12:07.143109  487084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:12:07.151538  487084 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:12:07.151608  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1207 23:12:07.159652  487084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1207 23:12:07.172062  487084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:12:07.184591  487084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1207 23:12:07.197988  487084 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1207 23:12:07.201949  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:07.212295  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:07.335873  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:07.349280  487084 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:12:07.349636  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:07.351992  487084 out.go:179] * Verifying Kubernetes components...
	I1207 23:12:07.353164  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:07.482271  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:07.495426  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1207 23:12:07.495497  487084 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1207 23:12:07.495703  487084 node_ready.go:35] waiting up to 6m0s for node "ha-907658-m02" to be "Ready" ...
	I1207 23:12:07.504809  487084 node_ready.go:49] node "ha-907658-m02" is "Ready"
	I1207 23:12:07.504835  487084 node_ready.go:38] duration metric: took 9.118175ms for node "ha-907658-m02" to be "Ready" ...
	I1207 23:12:07.504849  487084 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:12:07.504891  487084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:12:07.517382  487084 api_server.go:72] duration metric: took 168.030727ms to wait for apiserver process to appear ...
	I1207 23:12:07.517409  487084 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:12:07.517436  487084 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1207 23:12:07.523117  487084 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1207 23:12:07.524187  487084 api_server.go:141] control plane version: v1.34.2
	I1207 23:12:07.524214  487084 api_server.go:131] duration metric: took 6.79771ms to wait for apiserver health ...
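
After the stale VIP host is overridden, readiness is established by waiting for the kube-apiserver process and then polling https://192.168.49.2:8443/healthz until it answers 200 "ok". A minimal sketch of such a probe that trusts the cluster CA from the client config above; the CA path, retry interval and timeout are assumptions.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    // probeHealthz polls the apiserver /healthz endpoint until it returns 200,
    // mirroring the wait in the log above. caFile is the cluster CA
    // (.minikube/ca.crt in the client config shown earlier).
    func probeHealthz(url, caFile string, timeout time.Duration) error {
        caPEM, err := os.ReadFile(caFile)
        if err != nil {
            return err
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            return fmt.Errorf("no certs parsed from %s", caFile)
        }
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        err := probeHealthz("https://192.168.49.2:8443/healthz",
            "/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", time.Minute)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
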
	I1207 23:12:07.524224  487084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:12:07.530960  487084 system_pods.go:59] 26 kube-system pods found
	I1207 23:12:07.531007  487084 system_pods.go:61] "coredns-66bc5c9577-7lkd8" [87d8dbef-c05d-4fcd-b08e-4ee6bce689ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.531030  487084 system_pods.go:61] "coredns-66bc5c9577-j9lqh" [50fb7869-af19-4fe4-a49d-bf8431faa47e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.531045  487084 system_pods.go:61] "etcd-ha-907658" [a1045f46-63e5-4adf-8cba-698626661685] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.531055  487084 system_pods.go:61] "etcd-ha-907658-m02" [e0fd4196-c559-4ed5-a866-f2edca5d028b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.531065  487084 system_pods.go:61] "etcd-ha-907658-m03" [ec660b37-46e0-4ea6-8143-43a215cb208e] Running
	I1207 23:12:07.531077  487084 system_pods.go:61] "kindnet-5lg58" [595946fb-4b57-4869-85e2-75debf3486ae] Running
	I1207 23:12:07.531082  487084 system_pods.go:61] "kindnet-9rqhs" [78003a20-15f9-43e0-8a11-9c215ade326b] Running
	I1207 23:12:07.531086  487084 system_pods.go:61] "kindnet-hzfvq" [8c0ef1d7-39de-46ce-b16b-4d2794e7dc20] Running
	I1207 23:12:07.531090  487084 system_pods.go:61] "kindnet-wvnmz" [464814b4-64d5-4cae-b298-44186fe9b844] Running
	I1207 23:12:07.531102  487084 system_pods.go:61] "kube-apiserver-ha-907658" [746157f2-b5d4-4a22-b0d0-e186dba5c022] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.531114  487084 system_pods.go:61] "kube-apiserver-ha-907658-m02" [69e1f1f9-cc80-4383-8bf2-cd362ab2fc9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.531122  487084 system_pods.go:61] "kube-apiserver-ha-907658-m03" [6dd58630-2169-4539-b8eb-d9971aef28c0] Running
	I1207 23:12:07.531128  487084 system_pods.go:61] "kube-controller-manager-ha-907658" [86717111-1edd-4e7d-bd64-87a0b751fd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.531132  487084 system_pods.go:61] "kube-controller-manager-ha-907658-m02" [2edf59bb-e62d-4897-9d2f-6a454cc72644] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.531138  487084 system_pods.go:61] "kube-controller-manager-ha-907658-m03" [87b33e73-dedd-477d-87fa-42e198df84ba] Running
	I1207 23:12:07.531141  487084 system_pods.go:61] "kube-proxy-8fwsf" [1d7267ee-074b-40da-bfe0-4b434d732d8c] Running
	I1207 23:12:07.531147  487084 system_pods.go:61] "kube-proxy-b8vz9" [cd4b68a6-4528-4644-bac6-158d1bffd0ed] Running
	I1207 23:12:07.531150  487084 system_pods.go:61] "kube-proxy-r5c77" [c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9] Running
	I1207 23:12:07.531153  487084 system_pods.go:61] "kube-proxy-sdhd8" [55e62bf1-af57-4c34-925a-c44c47ce32ce] Running
	I1207 23:12:07.531157  487084 system_pods.go:61] "kube-scheduler-ha-907658" [16a4e936-d293-4107-b559-200f764f7dd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.531164  487084 system_pods.go:61] "kube-scheduler-ha-907658-m02" [85e3e5a5-fe1f-4994-90d4-c4e42a5a887f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.531175  487084 system_pods.go:61] "kube-scheduler-ha-907658-m03" [ca765146-fd0b-4cc8-9f6e-55e2601a5033] Running
	I1207 23:12:07.531178  487084 system_pods.go:61] "kube-vip-ha-907658" [2fc8fc0b-3f23-44d1-909a-20f06169c8dd] Running
	I1207 23:12:07.531181  487084 system_pods.go:61] "kube-vip-ha-907658-m02" [53a8762d-c686-486f-9814-2f40e4ff3306] Running
	I1207 23:12:07.531184  487084 system_pods.go:61] "kube-vip-ha-907658-m03" [6bc4a730-7a65-43a8-a746-2bc3ffa9ccc8] Running
	I1207 23:12:07.531186  487084 system_pods.go:61] "storage-provisioner" [5e80f8de-afe9-4c94-997c-c06f5ff985db] Running
	I1207 23:12:07.531192  487084 system_pods.go:74] duration metric: took 6.96154ms to wait for pod list to return data ...
	I1207 23:12:07.531202  487084 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:12:07.533477  487084 default_sa.go:45] found service account: "default"
	I1207 23:12:07.533501  487084 default_sa.go:55] duration metric: took 2.292892ms for default service account to be created ...
	I1207 23:12:07.533508  487084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:12:07.539025  487084 system_pods.go:86] 26 kube-system pods found
	I1207 23:12:07.539051  487084 system_pods.go:89] "coredns-66bc5c9577-7lkd8" [87d8dbef-c05d-4fcd-b08e-4ee6bce689ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.539059  487084 system_pods.go:89] "coredns-66bc5c9577-j9lqh" [50fb7869-af19-4fe4-a49d-bf8431faa47e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.539067  487084 system_pods.go:89] "etcd-ha-907658" [a1045f46-63e5-4adf-8cba-698626661685] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.539072  487084 system_pods.go:89] "etcd-ha-907658-m02" [e0fd4196-c559-4ed5-a866-f2edca5d028b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.539076  487084 system_pods.go:89] "etcd-ha-907658-m03" [ec660b37-46e0-4ea6-8143-43a215cb208e] Running
	I1207 23:12:07.539080  487084 system_pods.go:89] "kindnet-5lg58" [595946fb-4b57-4869-85e2-75debf3486ae] Running
	I1207 23:12:07.539083  487084 system_pods.go:89] "kindnet-9rqhs" [78003a20-15f9-43e0-8a11-9c215ade326b] Running
	I1207 23:12:07.539087  487084 system_pods.go:89] "kindnet-hzfvq" [8c0ef1d7-39de-46ce-b16b-4d2794e7dc20] Running
	I1207 23:12:07.539090  487084 system_pods.go:89] "kindnet-wvnmz" [464814b4-64d5-4cae-b298-44186fe9b844] Running
	I1207 23:12:07.539097  487084 system_pods.go:89] "kube-apiserver-ha-907658" [746157f2-b5d4-4a22-b0d0-e186dba5c022] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.539105  487084 system_pods.go:89] "kube-apiserver-ha-907658-m02" [69e1f1f9-cc80-4383-8bf2-cd362ab2fc9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.539109  487084 system_pods.go:89] "kube-apiserver-ha-907658-m03" [6dd58630-2169-4539-b8eb-d9971aef28c0] Running
	I1207 23:12:07.539118  487084 system_pods.go:89] "kube-controller-manager-ha-907658" [86717111-1edd-4e7d-bd64-87a0b751fd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.539123  487084 system_pods.go:89] "kube-controller-manager-ha-907658-m02" [2edf59bb-e62d-4897-9d2f-6a454cc72644] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.539127  487084 system_pods.go:89] "kube-controller-manager-ha-907658-m03" [87b33e73-dedd-477d-87fa-42e198df84ba] Running
	I1207 23:12:07.539130  487084 system_pods.go:89] "kube-proxy-8fwsf" [1d7267ee-074b-40da-bfe0-4b434d732d8c] Running
	I1207 23:12:07.539139  487084 system_pods.go:89] "kube-proxy-b8vz9" [cd4b68a6-4528-4644-bac6-158d1bffd0ed] Running
	I1207 23:12:07.539144  487084 system_pods.go:89] "kube-proxy-r5c77" [c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9] Running
	I1207 23:12:07.539153  487084 system_pods.go:89] "kube-proxy-sdhd8" [55e62bf1-af57-4c34-925a-c44c47ce32ce] Running
	I1207 23:12:07.539159  487084 system_pods.go:89] "kube-scheduler-ha-907658" [16a4e936-d293-4107-b559-200f764f7dd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.539164  487084 system_pods.go:89] "kube-scheduler-ha-907658-m02" [85e3e5a5-fe1f-4994-90d4-c4e42a5a887f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.539167  487084 system_pods.go:89] "kube-scheduler-ha-907658-m03" [ca765146-fd0b-4cc8-9f6e-55e2601a5033] Running
	I1207 23:12:07.539171  487084 system_pods.go:89] "kube-vip-ha-907658" [2fc8fc0b-3f23-44d1-909a-20f06169c8dd] Running
	I1207 23:12:07.539174  487084 system_pods.go:89] "kube-vip-ha-907658-m02" [53a8762d-c686-486f-9814-2f40e4ff3306] Running
	I1207 23:12:07.539176  487084 system_pods.go:89] "kube-vip-ha-907658-m03" [6bc4a730-7a65-43a8-a746-2bc3ffa9ccc8] Running
	I1207 23:12:07.539181  487084 system_pods.go:89] "storage-provisioner" [5e80f8de-afe9-4c94-997c-c06f5ff985db] Running
	I1207 23:12:07.539191  487084 system_pods.go:126] duration metric: took 5.677775ms to wait for k8s-apps to be running ...
	I1207 23:12:07.539200  487084 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:12:07.539244  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:12:07.552415  487084 system_svc.go:56] duration metric: took 13.204195ms WaitForService to wait for kubelet
	I1207 23:12:07.552445  487084 kubeadm.go:587] duration metric: took 203.099861ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:12:07.552461  487084 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:12:07.556717  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:07.556763  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:07.556789  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:07.556794  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:07.556800  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:07.556804  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:07.556815  487084 node_conditions.go:105] duration metric: took 4.343663ms to run NodePressure ...
	I1207 23:12:07.556830  487084 start.go:242] waiting for startup goroutines ...
	I1207 23:12:07.556864  487084 start.go:256] writing updated cluster config ...
	I1207 23:12:07.559024  487084 out.go:203] 
	I1207 23:12:07.560420  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:07.560527  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:07.562073  487084 out.go:179] * Starting "ha-907658-m04" worker node in "ha-907658" cluster
	I1207 23:12:07.563315  487084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:12:07.564547  487084 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:12:07.565586  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:12:07.565600  487084 cache.go:65] Caching tarball of preloaded images
	I1207 23:12:07.565653  487084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:12:07.565684  487084 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:12:07.565695  487084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:12:07.565787  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:07.585455  487084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:12:07.585473  487084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:12:07.585488  487084 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:12:07.585525  487084 start.go:360] acquireMachinesLock for ha-907658-m04: {Name:mkbf928fa5c7c7d65c3e97ec1b1d2c403a4aafbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:12:07.585593  487084 start.go:364] duration metric: took 46.24µs to acquireMachinesLock for "ha-907658-m04"
	I1207 23:12:07.585618  487084 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:12:07.585630  487084 fix.go:54] fixHost starting: m04
	I1207 23:12:07.585905  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m04 --format={{.State.Status}}
	I1207 23:12:07.603987  487084 fix.go:112] recreateIfNeeded on ha-907658-m04: state=Stopped err=<nil>
	W1207 23:12:07.604014  487084 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:12:07.605765  487084 out.go:252] * Restarting existing docker container for "ha-907658-m04" ...
	I1207 23:12:07.605839  487084 cli_runner.go:164] Run: docker start ha-907658-m04
	I1207 23:12:07.853178  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m04 --format={{.State.Status}}
	I1207 23:12:07.874755  487084 kic.go:430] container "ha-907658-m04" state is running.
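
Bringing m04 back is just docker start on the existing container, followed by inspecting {{.State.Status}} until it reports "running". A sketch with os/exec; the container name comes from the log, while the timeout and polling interval are assumptions.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // startAndWait restarts an existing container and waits for its state to
    // become "running", mirroring the docker start / inspect calls above.
    func startAndWait(name string, timeout time.Duration) error {
        if out, err := exec.Command("docker", "start", name).CombinedOutput(); err != nil {
            return fmt.Errorf("docker start %s: %v: %s", name, err, out)
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("docker", "container", "inspect", name,
                "--format", "{{.State.Status}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "running" {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("container %s not running after %s", name, timeout)
    }

    func main() {
        if err := startAndWait("ha-907658-m04", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }
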
	I1207 23:12:07.875212  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:12:07.896653  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:07.897024  487084 machine.go:94] provisionDockerMachine start ...
	I1207 23:12:07.897151  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:07.918923  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:07.919195  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:07.919216  487084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:12:07.919824  487084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49894->127.0.0.1:33223: read: connection reset by peer
	I1207 23:12:11.048469  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m04
	
	I1207 23:12:11.048499  487084 ubuntu.go:182] provisioning hostname "ha-907658-m04"
	I1207 23:12:11.048563  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.066447  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:11.066738  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:11.066753  487084 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-907658-m04 && echo "ha-907658-m04" | sudo tee /etc/hostname
	I1207 23:12:11.206276  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m04
	
	I1207 23:12:11.206388  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.225667  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:11.225909  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:11.225925  487084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-907658-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-907658-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-907658-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:12:11.355703  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:12:11.355747  487084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:12:11.355789  487084 ubuntu.go:190] setting up certificates
	I1207 23:12:11.355803  487084 provision.go:84] configureAuth start
	I1207 23:12:11.355885  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:12:11.374837  487084 provision.go:143] copyHostCerts
	I1207 23:12:11.374879  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:11.374918  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:12:11.374932  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:11.375021  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:12:11.375125  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:11.375155  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:12:11.375165  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:11.375205  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:12:11.375256  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:11.375278  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:12:11.375284  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:11.375321  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:12:11.375435  487084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.ha-907658-m04 san=[127.0.0.1 192.168.49.5 ha-907658-m04 localhost minikube]
	I1207 23:12:11.430934  487084 provision.go:177] copyRemoteCerts
	I1207 23:12:11.431006  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:12:11.431063  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.449187  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:11.543515  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 23:12:11.543582  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 23:12:11.562188  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 23:12:11.562249  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 23:12:11.579970  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 23:12:11.580024  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:12:11.597607  487084 provision.go:87] duration metric: took 241.785948ms to configureAuth
	I1207 23:12:11.597642  487084 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:12:11.597863  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:11.597964  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.616041  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:11.616267  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:11.616282  487084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:12:11.900554  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:12:11.900587  487084 machine.go:97] duration metric: took 4.00354246s to provisionDockerMachine
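The drop-in written just above sets CRIO_MINIKUBE_OPTIONS so CRI-O treats the in-cluster service CIDR as an insecure registry. One illustrative way to confirm it landed on the worker node, assuming the profile and node names from this run:
$ minikube -p ha-907658 ssh -n m04 -- cat /etc/sysconfig/crio.minikube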
	I1207 23:12:11.900600  487084 start.go:293] postStartSetup for "ha-907658-m04" (driver="docker")
	I1207 23:12:11.900611  487084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:12:11.900667  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:12:11.900705  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.919920  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.015993  487084 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:12:12.019664  487084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:12:12.019701  487084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:12:12.019713  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:12:12.019773  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:12:12.019880  487084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:12:12.019892  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /etc/ssl/certs/3931252.pem
	I1207 23:12:12.020003  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:12:12.028252  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:12.045963  487084 start.go:296] duration metric: took 145.345162ms for postStartSetup
	I1207 23:12:12.046054  487084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:12:12.046100  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:12.064419  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.155615  487084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:12:12.160279  487084 fix.go:56] duration metric: took 4.57464273s for fixHost
	I1207 23:12:12.160305  487084 start.go:83] releasing machines lock for "ha-907658-m04", held for 4.574698172s
	I1207 23:12:12.160388  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:12:12.180857  487084 out.go:179] * Found network options:
	I1207 23:12:12.182145  487084 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1207 23:12:12.183173  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:12.183195  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:12.183220  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:12.183237  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	I1207 23:12:12.183304  487084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:12:12.183368  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:12.183387  487084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:12:12.183450  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:12.203407  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.203844  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.357625  487084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:12:12.362541  487084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:12:12.362619  487084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:12:12.370757  487084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:12:12.370785  487084 start.go:496] detecting cgroup driver to use...
	I1207 23:12:12.370818  487084 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:12:12.370864  487084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:12:12.385478  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:12:12.398446  487084 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:12:12.398518  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:12:12.413312  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:12:12.425964  487084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:12:12.508240  487084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:12:12.594377  487084 docker.go:234] disabling docker service ...
	I1207 23:12:12.594469  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:12:12.609287  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:12:12.621518  487084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:12:12.706445  487084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:12:12.788828  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:12:12.801567  487084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:12:12.815799  487084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:12:12.815866  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.824631  487084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:12:12.824701  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.834415  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.843435  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.852233  487084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:12:12.861003  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.870357  487084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.879159  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.888283  487084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:12:12.896022  487084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:12:12.903097  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:12.988157  487084 ssh_runner.go:195] Run: sudo systemctl restart crio
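The sed edits above point CRI-O at the registry.k8s.io/pause:3.10.1 pause image, switch it to the systemd cgroup manager with a pod-scoped conmon cgroup, and allow unprivileged low ports via default_sysctls before restarting the daemon. An illustrative spot-check of the resulting config on the node (file path taken from the log above):
$ minikube -p ha-907658 ssh -n m04 -- sudo cat /etc/crio/crio.conf.d/02-crio.conf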
	I1207 23:12:13.133593  487084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:12:13.133671  487084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:12:13.137843  487084 start.go:564] Will wait 60s for crictl version
	I1207 23:12:13.137917  487084 ssh_runner.go:195] Run: which crictl
	I1207 23:12:13.141433  487084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:12:13.167512  487084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:12:13.167597  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:13.199036  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:13.229455  487084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:12:13.230791  487084 out.go:179]   - env NO_PROXY=192.168.49.2
	I1207 23:12:13.232057  487084 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1207 23:12:13.233540  487084 cli_runner.go:164] Run: docker network inspect ha-907658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:12:13.250726  487084 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:12:13.254740  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:13.265197  487084 mustload.go:66] Loading cluster: ha-907658
	I1207 23:12:13.265455  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:13.265697  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:12:13.284748  487084 host.go:66] Checking if "ha-907658" exists ...
	I1207 23:12:13.285028  487084 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658 for IP: 192.168.49.5
	I1207 23:12:13.285041  487084 certs.go:195] generating shared ca certs ...
	I1207 23:12:13.285056  487084 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:12:13.285200  487084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:12:13.285261  487084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:12:13.285280  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 23:12:13.285300  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 23:12:13.285317  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 23:12:13.285349  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 23:12:13.285417  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:12:13.285460  487084 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:12:13.285474  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:12:13.285512  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:12:13.285554  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:12:13.285592  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:12:13.285658  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:13.285698  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.285722  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem -> /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.285741  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.285769  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:12:13.304120  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:12:13.322222  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:12:13.340050  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:12:13.357784  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:12:13.376383  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:12:13.395635  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:12:13.413473  487084 ssh_runner.go:195] Run: openssl version
	I1207 23:12:13.419754  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.427021  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:12:13.434993  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.439202  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.439267  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.473339  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:12:13.481399  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.488584  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:12:13.495734  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.499349  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.499394  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.534119  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:12:13.542358  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.550110  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:12:13.557923  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.561771  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.561821  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.600731  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:12:13.608915  487084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:12:13.612836  487084 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:12:13.612892  487084 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2  false true} ...
	I1207 23:12:13.613000  487084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-907658-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:12:13.613071  487084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:12:13.620905  487084 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:12:13.620964  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1207 23:12:13.628840  487084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1207 23:12:13.642519  487084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
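The two files copied here are the kubelet systemd unit and the kubeadm drop-in shown a few lines up. If needed, the merged unit can be inspected on the node after the daemon-reload, for example (illustrative, same profile and node as above):
$ minikube -p ha-907658 ssh -n m04 -- systemctl cat kubelet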
	I1207 23:12:13.655821  487084 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1207 23:12:13.660403  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:13.672258  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:13.756400  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:13.769720  487084 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1207 23:12:13.770008  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:13.772651  487084 out.go:179] * Verifying Kubernetes components...
	I1207 23:12:13.773857  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:13.857293  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:13.870886  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1207 23:12:13.870958  487084 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1207 23:12:13.871160  487084 node_ready.go:35] waiting up to 6m0s for node "ha-907658-m04" to be "Ready" ...
	I1207 23:12:13.874196  487084 node_ready.go:49] node "ha-907658-m04" is "Ready"
	I1207 23:12:13.874220  487084 node_ready.go:38] duration metric: took 3.046821ms for node "ha-907658-m04" to be "Ready" ...
	I1207 23:12:13.874233  487084 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:12:13.874273  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:12:13.886840  487084 system_svc.go:56] duration metric: took 12.598168ms WaitForService to wait for kubelet
	I1207 23:12:13.886868  487084 kubeadm.go:587] duration metric: took 117.090427ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:12:13.886885  487084 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:12:13.890337  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:13.890362  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:13.890375  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:13.890380  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:13.890386  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:13.890392  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:13.890400  487084 node_conditions.go:105] duration metric: took 3.509832ms to run NodePressure ...
	I1207 23:12:13.890416  487084 start.go:242] waiting for startup goroutines ...
	I1207 23:12:13.890446  487084 start.go:256] writing updated cluster config ...
	I1207 23:12:13.890792  487084 ssh_runner.go:195] Run: rm -f paused
	I1207 23:12:13.894562  487084 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:12:13.895171  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 23:12:13.903646  487084 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7lkd8" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:12:15.910233  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:17.910533  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:20.410624  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:22.909833  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:25.410696  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:27.909729  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:29.911016  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:32.410597  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:34.410833  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:36.909456  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:38.911942  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:41.410807  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:43.910363  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:46.411526  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:48.911050  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:51.412217  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:53.910759  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:56.410211  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:58.410607  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:00.411373  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:02.910918  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:05.409687  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:07.409957  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:09.910681  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:12.410492  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:14.410764  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:16.909949  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:18.910470  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:20.911090  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:23.410279  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:25.910548  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:27.910666  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:30.410084  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:32.410161  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:34.411051  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:36.910027  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:39.410570  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:41.909517  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:43.910651  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:46.409768  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:48.410760  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:50.910511  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:52.910970  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:55.410193  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:57.410684  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:59.911085  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:01.911298  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:04.410828  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:06.910004  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:08.910803  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:11.410260  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:13.410549  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:15.911180  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:18.410236  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:20.910248  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:23.410312  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:25.909481  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:27.910308  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:29.910475  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:32.410112  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:34.910739  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:37.410174  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:39.410772  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:41.910812  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:44.409997  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:46.410369  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:48.910126  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:50.910698  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:53.410089  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:55.410604  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:57.910049  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:59.910503  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:02.409755  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:04.909540  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:06.910504  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:09.409997  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:11.411142  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:13.910274  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:16.410995  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:18.909895  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:20.909974  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:22.910657  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:25.410074  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:27.410196  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:29.410456  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:31.910828  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:34.410231  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:36.410432  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:38.909644  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:40.910092  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:42.910856  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:45.409802  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:47.410082  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:49.410149  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:51.910490  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:54.409927  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:56.410532  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:58.909671  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:00.910288  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:02.910545  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:05.410175  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:07.909887  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:09.910041  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:11.910457  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	I1207 23:16:13.895206  487084 pod_ready.go:86] duration metric: took 3m59.991503796s for pod "coredns-66bc5c9577-7lkd8" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:16:13.895245  487084 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1207 23:16:13.895263  487084 pod_ready.go:40] duration metric: took 4m0.000670566s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:16:13.897256  487084 out.go:203] 
	W1207 23:16:13.898559  487084 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1207 23:16:13.899846  487084 out.go:203] 
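The GUEST_START failure above is minikube's pod_ready wait giving up after 4m0s because coredns-66bc5c9577-7lkd8 never reported Ready. An equivalent manual check, sketched with kubectl against the same cluster (pod and namespace names taken from the log; the context name is assumed to match the profile):
$ kubectl --context ha-907658 -n kube-system get pod coredns-66bc5c9577-7lkd8 -o wide
$ kubectl --context ha-907658 -n kube-system wait --for=condition=Ready pod/coredns-66bc5c9577-7lkd8 --timeout=4m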
	
	
	==> CRI-O <==
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.419320979Z" level=info msg="Started container" PID=1066 containerID=59632406be56295008167128b06b3d246e8cb935a790ce61ab27d7c9a0210c7a description=default/busybox-7b57f96db7-wts8f/busybox id=7b19c8e0-1b80-4d6a-a660-59d86bda3787 name=/runtime.v1.RuntimeService/StartContainer sandboxID=974bf02e23133aac017f3d339f396c28ca8b3d88a654f87bb690e5359126f72a
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.42219102Z" level=info msg="Created container b66756d6bf8454e51e71c9a010e9f000c2d6f65f4202832cc7a3a3bf546e9566: kube-system/kube-proxy-r5c77/kube-proxy" id=6c2d44d8-af9b-488e-a8fa-96cfda6ad07e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.422764701Z" level=info msg="Starting container: b66756d6bf8454e51e71c9a010e9f000c2d6f65f4202832cc7a3a3bf546e9566" id=f4e610f6-9234-460c-ab15-e7f9e1e22236 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.423187163Z" level=info msg="Created container c6e4a88e898128e18b3156f394f70fd2b7676c0a3014577d38064cdc4c08e233: default/busybox-7b57f96db7-dslrx/busybox" id=947f78d0-ea74-4827-abe4-b36a0b7703f5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.423803868Z" level=info msg="Starting container: c6e4a88e898128e18b3156f394f70fd2b7676c0a3014577d38064cdc4c08e233" id=f8c5be5c-7fca-4d32-8a6c-68008559df07 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.425692066Z" level=info msg="Started container" PID=1071 containerID=c6e4a88e898128e18b3156f394f70fd2b7676c0a3014577d38064cdc4c08e233 description=default/busybox-7b57f96db7-dslrx/busybox id=f8c5be5c-7fca-4d32-8a6c-68008559df07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fee9745be2801cab826368bca687acad119bd0bddcf3bddfe083e1bc37ec0a2e
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.425952275Z" level=info msg="Started container" PID=1065 containerID=b66756d6bf8454e51e71c9a010e9f000c2d6f65f4202832cc7a3a3bf546e9566 description=kube-system/kube-proxy-r5c77/kube-proxy id=f4e610f6-9234-460c-ab15-e7f9e1e22236 name=/runtime.v1.RuntimeService/StartContainer sandboxID=81d062f869179dcf8073b42df610726a49898283cc3b7b1c4382936f244009bc
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.828232315Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.832561313Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.832595738Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.832614781Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.836515238Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.836547213Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.836564322Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.840132316Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.840156246Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.840172174Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.844126033Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.844147287Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.8441679Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.847881335Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.84790256Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.847918681Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.851426018Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.851446887Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	c6e4a88e89812       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   4 minutes ago       Running             busybox                   2                   fee9745be2801       busybox-7b57f96db7-dslrx            default
	59632406be562       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   4 minutes ago       Running             busybox                   2                   974bf02e23133       busybox-7b57f96db7-wts8f            default
	b66756d6bf845       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   4 minutes ago       Running             kube-proxy                0                   81d062f869179       kube-proxy-r5c77                    kube-system
	6e24622fde46e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 minutes ago       Running             kindnet-cni               0                   91e6c1a0bfdf0       kindnet-hzfvq                       kube-system
	86601d9f6ba07       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   4 minutes ago       Running             kube-controller-manager   0                   b67664be25ec4       kube-controller-manager-ha-907658   kube-system
	3102169518f14       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   4 minutes ago       Running             etcd                      0                   54905301bb684       etcd-ha-907658                      kube-system
	87abab3f9975c       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   4 minutes ago       Running             kube-apiserver            0                   56a831ff3eb23       kube-apiserver-ha-907658            kube-system
	db1d97b687400       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   4 minutes ago       Running             kube-scheduler            0                   cae40eeeedff8       kube-scheduler-ha-907658            kube-system
	04ab6dc0a72c2       6a2e30457bbed0ffdc161ff0131dfcfe9099692717f3d1bcae88b9db3d5a033c   4 minutes ago       Running             kube-vip                  0                   a3d8fbda9f509       kube-vip-ha-907658                  kube-system
	
	
	==> describe nodes <==
	Name:               ha-907658
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-907658
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=ha-907658
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_06_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:06:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-907658
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:16:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:16:07 +0000   Sun, 07 Dec 2025 23:06:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:16:07 +0000   Sun, 07 Dec 2025 23:06:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:16:07 +0000   Sun, 07 Dec 2025 23:06:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:16:07 +0000   Sun, 07 Dec 2025 23:07:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-907658
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                f44bac47-757c-4c31-8a75-ef9ebb40422e
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-dslrx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  default                     busybox-7b57f96db7-wts8f             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 etcd-ha-907658                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m36s
	  kube-system                 kindnet-hzfvq                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m31s
	  kube-system                 kube-apiserver-ha-907658             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 kube-controller-manager-ha-907658    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 kube-proxy-r5c77                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	  kube-system                 kube-scheduler-ha-907658             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 kube-vip-ha-907658                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  Starting                 5m39s                  kube-proxy       
	  Normal  Starting                 9m30s                  kube-proxy       
	  Normal  Starting                 9m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     9m36s                  kubelet          Node ha-907658 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    9m36s                  kubelet          Node ha-907658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  9m36s                  kubelet          Node ha-907658 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           9m32s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           9m10s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  NodeReady                8m49s                  kubelet          Node ha-907658 status is now: NodeReady
	  Normal  RegisteredNode           8m40s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           6m51s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  Starting                 5m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     5m54s (x8 over 5m55s)  kubelet          Node ha-907658 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m54s (x8 over 5m55s)  kubelet          Node ha-907658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s (x8 over 5m55s)  kubelet          Node ha-907658 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           5m35s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m18s (x8 over 4m18s)  kubelet          Node ha-907658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s (x8 over 4m18s)  kubelet          Node ha-907658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s (x8 over 4m18s)  kubelet          Node ha-907658 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	
	
	Name:               ha-907658-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-907658-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=ha-907658
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_07T23_07_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:07:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-907658-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:16:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:16:08 +0000   Sun, 07 Dec 2025 23:07:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:16:08 +0000   Sun, 07 Dec 2025 23:07:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:16:08 +0000   Sun, 07 Dec 2025 23:07:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:16:08 +0000   Sun, 07 Dec 2025 23:12:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-907658-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                c4423b9c-a5a3-462a-aa6c-dc14a3add1e7
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-sd5gw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 coredns-66bc5c9577-7lkd8                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m30s
	  kube-system                 coredns-66bc5c9577-j9lqh                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m30s
	  kube-system                 etcd-ha-907658-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m11s
	  kube-system                 kindnet-wvnmz                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m12s
	  kube-system                 kube-apiserver-ha-907658-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-controller-manager-ha-907658-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-proxy-sdhd8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-ha-907658-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-vip-ha-907658-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  Starting                 9m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     9m15s (x8 over 9m15s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    9m15s (x8 over 9m15s)  kubelet          Node ha-907658-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  9m15s (x8 over 9m15s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           9m12s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           9m10s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           8m40s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  NodeHasSufficientPID     6m57s (x8 over 6m57s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m57s (x8 over 6m57s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m57s (x8 over 6m57s)  kubelet          Node ha-907658-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m57s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m51s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  Starting                 5m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m53s (x8 over 5m53s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m53s (x8 over 5m53s)  kubelet          Node ha-907658-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s (x8 over 5m53s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           5m35s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m17s (x8 over 4m17s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s (x8 over 4m17s)  kubelet          Node ha-907658-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s (x8 over 4m17s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	
	
	Name:               ha-907658-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-907658-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=ha-907658
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_07T23_08_29_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:08:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-907658-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:16:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:16:15 +0000   Sun, 07 Dec 2025 23:08:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:16:15 +0000   Sun, 07 Dec 2025 23:08:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:16:15 +0000   Sun, 07 Dec 2025 23:08:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:16:15 +0000   Sun, 07 Dec 2025 23:08:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-907658-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                f80b86e6-d691-401f-8493-d6f45994affe
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9rqhs       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m48s
	  kube-system                 kube-proxy-b8vz9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m9s                   kube-proxy       
	  Normal  Starting                 7m46s                  kube-proxy       
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m48s (x3 over 7m48s)  kubelet          Node ha-907658-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    7m48s (x3 over 7m48s)  kubelet          Node ha-907658-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m48s (x3 over 7m48s)  kubelet          Node ha-907658-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           7m47s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           7m45s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           7m45s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  NodeReady                7m35s                  kubelet          Node ha-907658-m04 status is now: NodeReady
	  Normal  RegisteredNode           6m51s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           5m35s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  Starting                 5m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m27s)  kubelet          Node ha-907658-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m27s)  kubelet          Node ha-907658-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x8 over 5m27s)  kubelet          Node ha-907658-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  Starting                 4m9s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m9s)    kubelet          Node ha-907658-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m9s)    kubelet          Node ha-907658-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x8 over 4m9s)    kubelet          Node ha-907658-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.005623] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496066] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.005917] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[Dec 7 23:16] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.005986] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495337] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006100] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494663] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.005540] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496122] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.005022] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496083] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.004265] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.497368] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.004145] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496882] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.004333] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496983] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.004653] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496735] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.003847] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496975] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.003954] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.496836] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.004082] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [3102169518f14fb026edc01e1247ff4c2edc1292fb8d6ddab3310dc29262b65d] <==
	{"level":"warn","ts":"2025-12-07T23:12:02.189592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.196728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.215628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.224754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.237034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.246470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.252727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.261173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.271732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.278843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.288369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.296456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.305017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.312949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.321771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.329387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.336384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.348809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.354004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.362664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.369994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.387625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.392081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.399402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:12:02.408031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45760","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:16:17 up  1:58,  0 user,  load average: 0.36, 1.17, 1.53
	Linux ha-907658 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6e24622fde46e804a62af01a0bc9c1984d71da811c0cb4227298bc171e53fbb1] <==
	I1207 23:15:33.828569       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
	I1207 23:15:43.828178       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1207 23:15:43.828259       1 main.go:324] Node ha-907658-m02 has CIDR [10.244.1.0/24] 
	I1207 23:15:43.828889       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1207 23:15:43.828920       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
	I1207 23:15:43.829151       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:15:43.829177       1 main.go:301] handling current node
	I1207 23:15:53.829598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:15:53.829646       1 main.go:301] handling current node
	I1207 23:15:53.829666       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1207 23:15:53.829673       1 main.go:324] Node ha-907658-m02 has CIDR [10.244.1.0/24] 
	I1207 23:15:53.829901       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1207 23:15:53.829912       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
	I1207 23:16:03.827509       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1207 23:16:03.827541       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
	I1207 23:16:03.827716       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:16:03.827727       1 main.go:301] handling current node
	I1207 23:16:03.827738       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1207 23:16:03.827742       1 main.go:324] Node ha-907658-m02 has CIDR [10.244.1.0/24] 
	I1207 23:16:13.832500       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:16:13.832542       1 main.go:301] handling current node
	I1207 23:16:13.832563       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1207 23:16:13.832569       1 main.go:324] Node ha-907658-m02 has CIDR [10.244.1.0/24] 
	I1207 23:16:13.832817       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1207 23:16:13.832843       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [87abab3f9975c7d1ffa51c90a94a832599db31aa8d9e2e4cdcccfa593c87020f] <==
	I1207 23:12:03.040289       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1207 23:12:03.040464       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 23:12:03.040505       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 23:12:03.040770       1 aggregator.go:171] initial CRD sync complete...
	I1207 23:12:03.040809       1 autoregister_controller.go:144] Starting autoregister controller
	I1207 23:12:03.040832       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:12:03.040883       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:12:03.041299       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1207 23:12:03.041943       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 23:12:03.042481       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1207 23:12:03.042740       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1207 23:12:03.049189       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:12:03.051184       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1207 23:12:03.058680       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1207 23:12:03.058715       1 policy_source.go:240] refreshing policies
	E1207 23:12:03.062917       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:12:03.092652       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:12:03.204088       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:12:03.945462       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1207 23:12:04.372374       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 23:12:04.373818       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:12:04.380398       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:12:06.632914       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:12:06.742193       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:12:06.884554       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [86601d9f6ba07c5cc957fcd84ee14c9ed14e0f86e2c332659c8fd9ca9c473cdd] <==
	I1207 23:12:06.403290       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1207 23:12:26.377390       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:26.377430       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:26.377438       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:26.377446       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:26.377453       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377569       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377609       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377617       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377626       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377632       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	I1207 23:12:46.388648       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5lg58"
	I1207 23:12:46.410719       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5lg58"
	I1207 23:12:46.411071       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-907658-m03"
	I1207 23:12:46.433046       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-907658-m03"
	I1207 23:12:46.433163       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-907658-m03"
	I1207 23:12:46.454493       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-907658-m03"
	I1207 23:12:46.454614       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8fwsf"
	I1207 23:12:46.480073       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8fwsf"
	I1207 23:12:46.480362       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-907658-m03"
	I1207 23:12:46.506233       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-907658-m03"
	I1207 23:12:46.506270       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-907658-m03"
	I1207 23:12:46.539150       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-907658-m03"
	I1207 23:12:46.539211       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-907658-m03"
	I1207 23:12:46.557024       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-907658-m03"
	
	
	==> kube-proxy [b66756d6bf8454e51e71c9a010e9f000c2d6f65f4202832cc7a3a3bf546e9566] <==
	I1207 23:12:03.463144       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:12:03.526682       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 23:12:03.627174       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 23:12:03.627210       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 23:12:03.627301       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:12:03.644894       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:12:03.644940       1 server_linux.go:132] "Using iptables Proxier"
	I1207 23:12:03.650181       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:12:03.650669       1 server.go:527] "Version info" version="v1.34.2"
	I1207 23:12:03.650718       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:12:03.653161       1 config.go:200] "Starting service config controller"
	I1207 23:12:03.653188       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:12:03.653219       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:12:03.653225       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:12:03.653244       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:12:03.653256       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:12:03.653346       1 config.go:309] "Starting node config controller"
	I1207 23:12:03.653353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:12:03.653366       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:12:03.753518       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:12:03.753552       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:12:03.753868       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [db1d97b6874004dcfa1bfc301e8470ac6e8ab810f5002178c4d64e0899af2340] <==
	I1207 23:11:59.847303       1 serving.go:386] Generated self-signed cert in-memory
	I1207 23:12:03.025213       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1207 23:12:03.025271       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:12:03.035813       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1207 23:12:03.035844       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:12:03.035857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 23:12:03.035870       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:12:03.035870       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1207 23:12:03.035879       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 23:12:03.036226       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:12:03.036552       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:12:03.136624       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 23:12:03.136650       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:12:03.136707       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 07 23:12:00 ha-907658 kubelet[746]: E1207 23:12:00.081635     746 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-907658\" not found" node="ha-907658"
	Dec 07 23:12:01 ha-907658 kubelet[746]: E1207 23:12:01.083780     746 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-907658\" not found" node="ha-907658"
	Dec 07 23:12:01 ha-907658 kubelet[746]: E1207 23:12:01.083932     746 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-907658\" not found" node="ha-907658"
	Dec 07 23:12:01 ha-907658 kubelet[746]: E1207 23:12:01.084030     746 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-907658\" not found" node="ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.042925     746 apiserver.go:52] "Watching apiserver"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.045963     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.069383     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-ha-907658\" already exists" pod="kube-system/etcd-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.069626     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.087189     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-907658\" already exists" pod="kube-system/kube-apiserver-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.091705     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.100510     746 kubelet_node_status.go:124] "Node was previously registered" node="ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.100646     746 kubelet_node_status.go:78] "Successfully registered node" node="ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.100685     746 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.101661     746 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.104485     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-907658\" already exists" pod="kube-system/kube-controller-manager-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.104628     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.115174     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-907658\" already exists" pod="kube-system/kube-scheduler-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.115385     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.125044     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-vip-ha-907658\" already exists" pod="kube-system/kube-vip-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.146852     746 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.199347     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9-xtables-lock\") pod \"kube-proxy-r5c77\" (UID: \"c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9\") " pod="kube-system/kube-proxy-r5c77"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.199404     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c0ef1d7-39de-46ce-b16b-4d2794e7dc20-lib-modules\") pod \"kindnet-hzfvq\" (UID: \"8c0ef1d7-39de-46ce-b16b-4d2794e7dc20\") " pod="kube-system/kindnet-hzfvq"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.200064     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8c0ef1d7-39de-46ce-b16b-4d2794e7dc20-cni-cfg\") pod \"kindnet-hzfvq\" (UID: \"8c0ef1d7-39de-46ce-b16b-4d2794e7dc20\") " pod="kube-system/kindnet-hzfvq"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.200129     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c0ef1d7-39de-46ce-b16b-4d2794e7dc20-xtables-lock\") pod \"kindnet-hzfvq\" (UID: \"8c0ef1d7-39de-46ce-b16b-4d2794e7dc20\") " pod="kube-system/kindnet-hzfvq"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.200193     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9-lib-modules\") pod \"kube-proxy-r5c77\" (UID: \"c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9\") " pod="kube-system/kube-proxy-r5c77"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-907658 -n ha-907658
helpers_test.go:269: (dbg) Run:  kubectl --context ha-907658 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.52s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:309: expected profile "ha-907658" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-907658\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-907658\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-907658\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"N
ame\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-p
lugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":fals
e,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-907658
helpers_test.go:243: (dbg) docker inspect ha-907658:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf",
	        "Created": "2025-12-07T23:06:25.641182516Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 487285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:11:52.946976582Z",
	            "FinishedAt": "2025-12-07T23:11:52.180976562Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf/hostname",
	        "HostsPath": "/var/lib/docker/containers/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf/hosts",
	        "LogPath": "/var/lib/docker/containers/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf/b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf-json.log",
	        "Name": "/ha-907658",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-907658:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-907658",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b18b557fea95c806a3bf174d1482bc2a7fdb2737b9fcb5b0eeea6e687f5d8adf",
	                "LowerDir": "/var/lib/docker/overlay2/95f4d37acd9603eb9082e08eb2b25d1d911e5a215fb4e71b00c8c77b90dafbc3-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/95f4d37acd9603eb9082e08eb2b25d1d911e5a215fb4e71b00c8c77b90dafbc3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/95f4d37acd9603eb9082e08eb2b25d1d911e5a215fb4e71b00c8c77b90dafbc3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/95f4d37acd9603eb9082e08eb2b25d1d911e5a215fb4e71b00c8c77b90dafbc3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-907658",
	                "Source": "/var/lib/docker/volumes/ha-907658/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-907658",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-907658",
	                "name.minikube.sigs.k8s.io": "ha-907658",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ba5e035333284e7ec191aa45f8e8f710a1211614ee9390e57a685e532fd2b7d0",
	            "SandboxKey": "/var/run/docker/netns/ba5e03533328",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33213"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33214"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33217"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33215"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33216"
	                    }
	                ]
	            },
	            "Networks": {
	                "ha-907658": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "918c8f4f6e86f6f20607e87a6beb39a8a1d64cc9183e3317d1968551e79c40e2",
	                    "EndpointID": "39156e34f46c5c2dd2e2dd90a72a9e93d4aca46c4dae46d6dd8bcd5fd820e723",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "d2:5b:58:4b:cd:fa",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-907658",
	                        "b18b557fea95"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
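The inspect dump above is the raw data the harness reads its connection details from. A minimal sketch of pulling the same fields back out with Go templates; the two format strings are copied from the start log further below and these lines are not part of the captured test output:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-907658   # mapped SSH port (33213 above)
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-907658         # container IP (192.168.49.2 above)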
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-907658 -n ha-907658
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 logs -n 25: (1.122972535s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-907658 ssh -n ha-907658-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test_ha-907658-m03_ha-907658-m04.txt                                         │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ cp      │ ha-907658 cp testdata/cp-test.txt ha-907658-m04:/home/docker/cp-test.txt                                                             │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ cp      │ ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786965912/001/cp-test_ha-907658-m04.txt │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ cp      │ ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt ha-907658:/home/docker/cp-test_ha-907658-m04_ha-907658.txt                       │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658 sudo cat /home/docker/cp-test_ha-907658-m04_ha-907658.txt                                                 │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:08 UTC │ 07 Dec 25 23:08 UTC │
	│ cp      │ ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt ha-907658-m02:/home/docker/cp-test_ha-907658-m04_ha-907658-m02.txt               │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m02 sudo cat /home/docker/cp-test_ha-907658-m04_ha-907658-m02.txt                                         │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ cp      │ ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt ha-907658-m03:/home/docker/cp-test_ha-907658-m04_ha-907658-m03.txt               │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ ssh     │ ha-907658 ssh -n ha-907658-m03 sudo cat /home/docker/cp-test_ha-907658-m04_ha-907658-m03.txt                                         │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ node    │ ha-907658 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ node    │ ha-907658 node start m02 --alsologtostderr -v 5                                                                                      │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:09 UTC │
	│ node    │ ha-907658 node list --alsologtostderr -v 5                                                                                           │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │                     │
	│ stop    │ ha-907658 stop --alsologtostderr -v 5                                                                                                │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:09 UTC │ 07 Dec 25 23:10 UTC │
	│ start   │ ha-907658 start --wait true --alsologtostderr -v 5                                                                                   │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:10 UTC │ 07 Dec 25 23:11 UTC │
	│ node    │ ha-907658 node list --alsologtostderr -v 5                                                                                           │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:11 UTC │                     │
	│ node    │ ha-907658 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:11 UTC │ 07 Dec 25 23:11 UTC │
	│ stop    │ ha-907658 stop --alsologtostderr -v 5                                                                                                │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:11 UTC │ 07 Dec 25 23:11 UTC │
	│ start   │ ha-907658 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:11 UTC │                     │
	│ node    │ ha-907658 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-907658 │ jenkins │ v1.37.0 │ 07 Dec 25 23:16 UTC │ 07 Dec 25 23:17 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
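The audit table ends with the "node add --control-plane" run that this HAppy check follows. A minimal sketch of re-checking the cluster by hand after that step, using standard minikube/kubectl commands and assuming the kubectl context carries the profile name; none of these lines are part of the captured log:

	out/minikube-linux-amd64 -p ha-907658 node list --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-907658 status --alsologtostderr -v 5
	kubectl --context ha-907658 get nodes -o wide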
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:11:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:11:52.723208  487084 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:11:52.723342  487084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:11:52.723354  487084 out.go:374] Setting ErrFile to fd 2...
	I1207 23:11:52.723361  487084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:11:52.723559  487084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:11:52.724064  487084 out.go:368] Setting JSON to false
	I1207 23:11:52.725035  487084 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6857,"bootTime":1765142256,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:11:52.725102  487084 start.go:143] virtualization: kvm guest
	I1207 23:11:52.726965  487084 out.go:179] * [ha-907658] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:11:52.728170  487084 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:11:52.728167  487084 notify.go:221] Checking for updates...
	I1207 23:11:52.730209  487084 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:11:52.731286  487084 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:11:52.732435  487084 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:11:52.733509  487084 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:11:52.734621  487084 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:11:52.736265  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:52.736931  487084 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:11:52.761948  487084 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:11:52.762088  487084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:11:52.815796  487084 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-07 23:11:52.805859782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:11:52.815895  487084 docker.go:319] overlay module found
	I1207 23:11:52.818644  487084 out.go:179] * Using the docker driver based on existing profile
	I1207 23:11:52.819812  487084 start.go:309] selected driver: docker
	I1207 23:11:52.819828  487084 start.go:927] validating driver "docker" against &{Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:11:52.819961  487084 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:11:52.820059  487084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:11:52.873900  487084 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:44 SystemTime:2025-12-07 23:11:52.864641727 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:11:52.874579  487084 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:11:52.874614  487084 cni.go:84] Creating CNI manager for ""
	I1207 23:11:52.874670  487084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1207 23:11:52.874722  487084 start.go:353] cluster config:
	{Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:11:52.876967  487084 out.go:179] * Starting "ha-907658" primary control-plane node in "ha-907658" cluster
	I1207 23:11:52.877923  487084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:11:52.878975  487084 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:11:52.880201  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:11:52.880231  487084 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 23:11:52.880239  487084 cache.go:65] Caching tarball of preloaded images
	I1207 23:11:52.880300  487084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:11:52.880362  487084 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:11:52.880377  487084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:11:52.880537  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:52.900771  487084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:11:52.900792  487084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:11:52.900810  487084 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:11:52.900849  487084 start.go:360] acquireMachinesLock for ha-907658: {Name:mkd7016770bc40ef9cd544023d232b92bc7cf832 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:11:52.900927  487084 start.go:364] duration metric: took 42.672µs to acquireMachinesLock for "ha-907658"
	I1207 23:11:52.900952  487084 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:11:52.900961  487084 fix.go:54] fixHost starting: 
	I1207 23:11:52.901168  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:11:52.918459  487084 fix.go:112] recreateIfNeeded on ha-907658: state=Stopped err=<nil>
	W1207 23:11:52.918485  487084 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:11:52.920300  487084 out.go:252] * Restarting existing docker container for "ha-907658" ...
	I1207 23:11:52.920381  487084 cli_runner.go:164] Run: docker start ha-907658
	I1207 23:11:53.154762  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:11:53.172884  487084 kic.go:430] container "ha-907658" state is running.
	I1207 23:11:53.173368  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:11:53.192850  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:53.193082  487084 machine.go:94] provisionDockerMachine start ...
	I1207 23:11:53.193169  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:53.211683  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:53.211988  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:53.212008  487084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:11:53.212567  487084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40796->127.0.0.1:33213: read: connection reset by peer
	I1207 23:11:56.342986  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658
	
	I1207 23:11:56.343016  487084 ubuntu.go:182] provisioning hostname "ha-907658"
	I1207 23:11:56.343087  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:56.361678  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:56.361914  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:56.361928  487084 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-907658 && echo "ha-907658" | sudo tee /etc/hostname
	I1207 23:11:56.498208  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658
	
	I1207 23:11:56.498287  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:56.517144  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:56.517409  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:56.517428  487084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-907658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-907658/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-907658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:11:56.645103  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:11:56.645138  487084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:11:56.645173  487084 ubuntu.go:190] setting up certificates
	I1207 23:11:56.645187  487084 provision.go:84] configureAuth start
	I1207 23:11:56.645254  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:11:56.663482  487084 provision.go:143] copyHostCerts
	I1207 23:11:56.663535  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:11:56.663565  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:11:56.663574  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:11:56.663652  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:11:56.663767  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:11:56.663794  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:11:56.663802  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:11:56.663845  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:11:56.663928  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:11:56.663951  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:11:56.663961  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:11:56.663999  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:11:56.664154  487084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.ha-907658 san=[127.0.0.1 192.168.49.2 ha-907658 localhost minikube]
	I1207 23:11:56.859476  487084 provision.go:177] copyRemoteCerts
	I1207 23:11:56.859539  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:11:56.859583  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:56.877854  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:56.971727  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 23:11:56.971784  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1207 23:11:56.989675  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 23:11:56.989726  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:11:57.006645  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 23:11:57.006699  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:11:57.024214  487084 provision.go:87] duration metric: took 379.007514ms to configureAuth
	I1207 23:11:57.024242  487084 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:11:57.024505  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:57.024648  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.043106  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:11:57.043322  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I1207 23:11:57.043362  487084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:11:57.351275  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:11:57.351301  487084 machine.go:97] duration metric: took 4.158205159s to provisionDockerMachine
	I1207 23:11:57.351316  487084 start.go:293] postStartSetup for "ha-907658" (driver="docker")
	I1207 23:11:57.351345  487084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:11:57.351414  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:11:57.351463  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.370902  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.463959  487084 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:11:57.467550  487084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:11:57.467577  487084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:11:57.467590  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:11:57.467657  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:11:57.467762  487084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:11:57.467778  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /etc/ssl/certs/3931252.pem
	I1207 23:11:57.467888  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:11:57.475351  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:11:57.492383  487084 start.go:296] duration metric: took 141.051455ms for postStartSetup
	I1207 23:11:57.492490  487084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:11:57.492538  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.510719  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.601727  487084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:11:57.606180  487084 fix.go:56] duration metric: took 4.705212142s for fixHost
	I1207 23:11:57.606209  487084 start.go:83] releasing machines lock for "ha-907658", held for 4.705267868s
	I1207 23:11:57.606320  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:11:57.624104  487084 ssh_runner.go:195] Run: cat /version.json
	I1207 23:11:57.624182  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.624209  487084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:11:57.624294  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:11:57.642922  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.643662  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:11:57.785793  487084 ssh_runner.go:195] Run: systemctl --version
	I1207 23:11:57.792308  487084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:11:57.826743  487084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:11:57.831572  487084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:11:57.831644  487084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:11:57.839631  487084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:11:57.839653  487084 start.go:496] detecting cgroup driver to use...
	I1207 23:11:57.839690  487084 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:11:57.839733  487084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:11:57.853650  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:11:57.866122  487084 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:11:57.866194  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:11:57.880612  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:11:57.893020  487084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:11:57.971718  487084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:11:58.051170  487084 docker.go:234] disabling docker service ...
	I1207 23:11:58.051240  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:11:58.065815  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:11:58.078071  487084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:11:58.159158  487084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:11:58.241617  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:11:58.253808  487084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:11:58.267810  487084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:11:58.267865  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.276619  487084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:11:58.276694  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.285159  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.293362  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.301983  487084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:11:58.310270  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.319027  487084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.327563  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:11:58.336683  487084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:11:58.344663  487084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:11:58.352591  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:11:58.430723  487084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:11:58.561670  487084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:11:58.561748  487084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:11:58.565839  487084 start.go:564] Will wait 60s for crictl version
	I1207 23:11:58.565925  487084 ssh_runner.go:195] Run: which crictl
	I1207 23:11:58.569353  487084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:11:58.593853  487084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:11:58.593949  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:11:58.621201  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:11:58.650380  487084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
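The sed edits and restart above configure CRI-O (pause image, systemd cgroup manager, conmon cgroup, unprivileged port sysctl) before the runtime banner is printed. A minimal sketch of reading the rendered settings back from the node, assuming the profile's ssh access; not part of the captured log:

	out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"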
	I1207 23:11:58.651543  487084 cli_runner.go:164] Run: docker network inspect ha-907658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:11:58.669539  487084 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:11:58.673718  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:11:58.684392  487084 kubeadm.go:884] updating cluster {Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:11:58.684550  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:11:58.684610  487084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:11:58.716893  487084 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:11:58.716915  487084 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:11:58.717012  487084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:11:58.743428  487084 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:11:58.743474  487084 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:11:58.743483  487084 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1207 23:11:58.743593  487084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-907658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
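The kubelet unit drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A minimal sketch of confirming what systemd actually loaded on the node, using standard systemctl; not part of the captured log:

	out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "sudo systemctl cat kubelet"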
	I1207 23:11:58.743655  487084 ssh_runner.go:195] Run: crio config
	I1207 23:11:58.789302  487084 cni.go:84] Creating CNI manager for ""
	I1207 23:11:58.789345  487084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1207 23:11:58.789368  487084 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:11:58.789396  487084 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-907658 NodeName:ha-907658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:11:58.789521  487084 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-907658"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
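The kubeadm config rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node later in this start (see the scp line below). A minimal sketch of reading it back for comparison, assuming the profile's ssh access; not part of the captured log:

	out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "sudo cat /var/tmp/minikube/kubeadm.yaml.new"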
	I1207 23:11:58.789548  487084 kube-vip.go:115] generating kube-vip config ...
	I1207 23:11:58.789589  487084 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1207 23:11:58.801884  487084 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:11:58.802014  487084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
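The kube-vip static pod above is what holds the HA VIP 192.168.49.254 on eth0 for the control plane (ARP mode, since the ipvs modules were reported unavailable just above). A minimal sketch of checking whether the VIP and the kube-vip container are actually up on the primary node; not part of the captured log:

	out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "ip addr show eth0"
	out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "sudo crictl ps --name kube-vip"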
	I1207 23:11:58.802092  487084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:11:58.809827  487084 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:11:58.809897  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1207 23:11:58.817290  487084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1207 23:11:58.829895  487084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:11:58.842148  487084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1207 23:11:58.854128  487084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1207 23:11:58.866494  487084 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1207 23:11:58.870208  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
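	The /etc/hosts update above is idempotent: it strips any existing control-plane.minikube.internal entry before appending the VIP mapping via a temp file. A standalone sketch of the same pattern, with the IP and hostname taken from the log line:
	IP=192.168.49.254; NAME=control-plane.minikube.internal
	# drop any old entry for $NAME, then append the current mapping and copy the result back
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts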
	I1207 23:11:58.879832  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:11:58.957062  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:11:58.981696  487084 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658 for IP: 192.168.49.2
	I1207 23:11:58.981720  487084 certs.go:195] generating shared ca certs ...
	I1207 23:11:58.981747  487084 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:58.981923  487084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:11:58.981976  487084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:11:58.981990  487084 certs.go:257] generating profile certs ...
	I1207 23:11:58.982095  487084 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key
	I1207 23:11:58.982127  487084 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7
	I1207 23:11:58.982147  487084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1207 23:11:59.053446  487084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7 ...
	I1207 23:11:59.053484  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7: {Name:mkde9a77ed2ccf374bbd7ef2ab8471222e930ca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.053683  487084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7 ...
	I1207 23:11:59.053700  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7: {Name:mkf9f5e1f2966de715814128c39c83c05472c22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.053837  487084 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt.be52f8f7 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt
	I1207 23:11:59.054023  487084 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.be52f8f7 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key
	I1207 23:11:59.054208  487084 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key
	I1207 23:11:59.054223  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 23:11:59.054240  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 23:11:59.054254  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 23:11:59.054268  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 23:11:59.054285  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 23:11:59.054298  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 23:11:59.054315  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 23:11:59.054346  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 23:11:59.054449  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:11:59.054492  487084 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:11:59.054503  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:11:59.054539  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:11:59.054597  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:11:59.054627  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:11:59.054683  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:11:59.054723  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem -> /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.054754  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.054767  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.055522  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:11:59.076096  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:11:59.092913  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:11:59.110126  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:11:59.126855  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1207 23:11:59.143407  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:11:59.160896  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:11:59.178517  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:11:59.196273  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:11:59.213156  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:11:59.230319  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:11:59.247989  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:11:59.259981  487084 ssh_runner.go:195] Run: openssl version
	I1207 23:11:59.265807  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.273185  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:11:59.280496  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.284023  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.284068  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:11:59.318047  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:11:59.325928  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.332951  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:11:59.340016  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.343716  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.343772  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:11:59.377866  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:11:59.386064  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.393852  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:11:59.401598  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.405548  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.405622  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:11:59.439621  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
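	Each openssl -hash / ln -fs pair above installs a CA under its subject-hash name (e.g. b5213941.0 for minikubeCA), which is exactly what the subsequent `test -L` probes verify. A minimal sketch of that step for one certificate, run on the node, with paths from the log:
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"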
	I1207 23:11:59.447485  487084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:11:59.451341  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:11:59.493084  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:11:59.535906  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:11:59.583567  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:11:59.642172  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:11:59.681845  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
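	The `-checkend 86400` runs above exit non-zero if a certificate expires within 24 hours, which is how the restart path decides whether regeneration is needed. A compact sketch checking a few of the same files on the node (paths from the log; intended to be run via `minikube ssh`):
	for c in apiserver-kubelet-client etcd/server front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    && echo "$c: valid for >24h" || echo "$c: expires within 24h"
	done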
	I1207 23:11:59.717892  487084 kubeadm.go:401] StartCluster: {Name:ha-907658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:11:59.718040  487084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:11:59.718122  487084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:11:59.750509  487084 cri.go:89] found id: "86601d9f6ba07c5cc957fcd84ee14c9ed14e0f86e2c332659c8fd9ca9c473cdd"
	I1207 23:11:59.750537  487084 cri.go:89] found id: "3102169518f14fb026edc01e1247ff4c2edc1292fb8d6ddab3310dc29262b65d"
	I1207 23:11:59.750543  487084 cri.go:89] found id: "87abab3f9975c7d1ffa51c90a94a832599db31aa8d9e2e4cdcccfa593c87020f"
	I1207 23:11:59.750548  487084 cri.go:89] found id: "db1d97b6874004dcfa1bfc301e8470ac6e8ab810f5002178c4d64e0899af2340"
	I1207 23:11:59.750560  487084 cri.go:89] found id: "04ab6dc0a72c2fd9ce998abf808c8139e9d16737d96e3dc5573726403cfba770"
	I1207 23:11:59.750567  487084 cri.go:89] found id: ""
	I1207 23:11:59.750620  487084 ssh_runner.go:195] Run: sudo runc list -f json
	W1207 23:11:59.763116  487084 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:11:59Z" level=error msg="open /run/runc: no such file or directory"
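	`runc list` fails here because /run/runc does not exist on this CRI-O node, so the unpause check logs a warning and continues. The container IDs themselves came from crictl; the same query can be reproduced directly (profile name assumed from this log):
	# list kube-system container IDs the same way the log does
	minikube -p ha-907658 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"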
	I1207 23:11:59.763191  487084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:11:59.771453  487084 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:11:59.771471  487084 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:11:59.771524  487084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:11:59.778977  487084 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:11:59.779462  487084 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-907658" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:11:59.779590  487084 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-389542/kubeconfig needs updating (will repair): [kubeconfig missing "ha-907658" cluster setting kubeconfig missing "ha-907658" context setting]
	I1207 23:11:59.780044  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.780730  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 23:11:59.781268  487084 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1207 23:11:59.781286  487084 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1207 23:11:59.781293  487084 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1207 23:11:59.781300  487084 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1207 23:11:59.781318  487084 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1207 23:11:59.781314  487084 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1207 23:11:59.781841  487084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:11:59.790236  487084 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1207 23:11:59.790262  487084 kubeadm.go:602] duration metric: took 18.784379ms to restartPrimaryControlPlane
	I1207 23:11:59.790272  487084 kubeadm.go:403] duration metric: took 72.393488ms to StartCluster
	I1207 23:11:59.790292  487084 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.790408  487084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:11:59.791175  487084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:11:59.791433  487084 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:11:59.791463  487084 start.go:242] waiting for startup goroutines ...
	I1207 23:11:59.791480  487084 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:11:59.791743  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:59.794127  487084 out.go:179] * Enabled addons: 
	I1207 23:11:59.795136  487084 addons.go:530] duration metric: took 3.661252ms for enable addons: enabled=[]
	I1207 23:11:59.795167  487084 start.go:247] waiting for cluster config update ...
	I1207 23:11:59.795178  487084 start.go:256] writing updated cluster config ...
	I1207 23:11:59.796468  487084 out.go:203] 
	I1207 23:11:59.797620  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:59.797739  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:59.799011  487084 out.go:179] * Starting "ha-907658-m02" control-plane node in "ha-907658" cluster
	I1207 23:11:59.799852  487084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:11:59.800858  487084 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:11:59.801718  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:11:59.801733  487084 cache.go:65] Caching tarball of preloaded images
	I1207 23:11:59.801784  487084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:11:59.801821  487084 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:11:59.801834  487084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:11:59.801944  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:11:59.823527  487084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:11:59.823550  487084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:11:59.823570  487084 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:11:59.823603  487084 start.go:360] acquireMachinesLock for ha-907658-m02: {Name:mk6484dd4dfe7ba137d5f583543a1831d27edba5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:11:59.823673  487084 start.go:364] duration metric: took 49.067µs to acquireMachinesLock for "ha-907658-m02"
	I1207 23:11:59.823696  487084 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:11:59.823702  487084 fix.go:54] fixHost starting: m02
	I1207 23:11:59.823927  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m02 --format={{.State.Status}}
	I1207 23:11:59.844560  487084 fix.go:112] recreateIfNeeded on ha-907658-m02: state=Stopped err=<nil>
	W1207 23:11:59.844589  487084 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:11:59.846377  487084 out.go:252] * Restarting existing docker container for "ha-907658-m02" ...
	I1207 23:11:59.846453  487084 cli_runner.go:164] Run: docker start ha-907658-m02
	I1207 23:12:00.130224  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m02 --format={{.State.Status}}
	I1207 23:12:00.155491  487084 kic.go:430] container "ha-907658-m02" state is running.
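	Restarting the stopped secondary is just a `docker start` of the existing container followed by a state check. The equivalent manual steps, using the container name from this log:
	docker start ha-907658-m02
	docker container inspect ha-907658-m02 --format '{{.State.Status}}'   # expect "running"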
	I1207 23:12:00.155911  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m02
	I1207 23:12:00.178281  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:00.178573  487084 machine.go:94] provisionDockerMachine start ...
	I1207 23:12:00.178649  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:00.198614  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:00.198945  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:00.198960  487084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:12:00.199661  487084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38884->127.0.0.1:33218: read: connection reset by peer
	I1207 23:12:03.333342  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m02
	
	I1207 23:12:03.333382  487084 ubuntu.go:182] provisioning hostname "ha-907658-m02"
	I1207 23:12:03.333446  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:03.352148  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:03.352463  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:03.352484  487084 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-907658-m02 && echo "ha-907658-m02" | sudo tee /etc/hostname
	I1207 23:12:03.505996  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m02
	
	I1207 23:12:03.506086  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:03.523096  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:03.523409  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:03.523430  487084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-907658-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-907658-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-907658-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:12:03.654538  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:12:03.654571  487084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:12:03.654593  487084 ubuntu.go:190] setting up certificates
	I1207 23:12:03.654607  487084 provision.go:84] configureAuth start
	I1207 23:12:03.654667  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m02
	I1207 23:12:03.678200  487084 provision.go:143] copyHostCerts
	I1207 23:12:03.678248  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:03.678285  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:12:03.678297  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:03.678397  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:12:03.678500  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:03.678535  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:12:03.678546  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:03.678587  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:12:03.678657  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:03.678682  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:12:03.678690  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:03.678715  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:12:03.678770  487084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.ha-907658-m02 san=[127.0.0.1 192.168.49.3 ha-907658-m02 localhost minikube]
	I1207 23:12:03.790264  487084 provision.go:177] copyRemoteCerts
	I1207 23:12:03.790352  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:12:03.790402  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:03.823101  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:03.924465  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 23:12:03.924539  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:12:03.944485  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 23:12:03.944556  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 23:12:03.968961  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 23:12:03.969036  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:12:03.995367  487084 provision.go:87] duration metric: took 340.743667ms to configureAuth
	I1207 23:12:03.995400  487084 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:12:03.995657  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:03.995779  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.026533  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:04.026857  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33218 <nil> <nil>}
	I1207 23:12:04.026885  487084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:12:04.415911  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:12:04.415941  487084 machine.go:97] duration metric: took 4.237351611s to provisionDockerMachine
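	provisionDockerMachine finishes by writing CRIO_MINIKUBE_OPTIONS and restarting CRI-O; the resulting file can be inspected on the node afterwards (node name taken from this log, selected with minikube's -n flag):
	minikube -p ha-907658 ssh -n ha-907658-m02 -- "sudo cat /etc/sysconfig/crio.minikube"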
	I1207 23:12:04.415957  487084 start.go:293] postStartSetup for "ha-907658-m02" (driver="docker")
	I1207 23:12:04.415971  487084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:12:04.416028  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:12:04.416078  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.434685  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.530207  487084 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:12:04.533967  487084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:12:04.533999  487084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:12:04.534014  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:12:04.534066  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:12:04.534139  487084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:12:04.534149  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /etc/ssl/certs/3931252.pem
	I1207 23:12:04.534230  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:12:04.542117  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:04.560472  487084 start.go:296] duration metric: took 144.495639ms for postStartSetup
	I1207 23:12:04.560570  487084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:12:04.560625  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.577649  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.669363  487084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:12:04.674346  487084 fix.go:56] duration metric: took 4.85062394s for fixHost
	I1207 23:12:04.674372  487084 start.go:83] releasing machines lock for "ha-907658-m02", held for 4.850686194s
	I1207 23:12:04.674436  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m02
	I1207 23:12:04.693901  487084 out.go:179] * Found network options:
	I1207 23:12:04.695122  487084 out.go:179]   - NO_PROXY=192.168.49.2
	W1207 23:12:04.696299  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:04.696348  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	I1207 23:12:04.696432  487084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:12:04.696482  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.696491  487084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:12:04.696545  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m02
	I1207 23:12:04.715832  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.716229  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33218 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m02/id_rsa Username:docker}
	I1207 23:12:04.880414  487084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:12:04.885363  487084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:12:04.885437  487084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:12:04.893312  487084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:12:04.893347  487084 start.go:496] detecting cgroup driver to use...
	I1207 23:12:04.893386  487084 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:12:04.893433  487084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:12:04.908112  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:12:04.920708  487084 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:12:04.920806  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:12:04.935538  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:12:04.948970  487084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:12:05.093803  487084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:12:05.237498  487084 docker.go:234] disabling docker service ...
	I1207 23:12:05.237578  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:12:05.255362  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:12:05.271477  487084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:12:05.401811  487084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:12:05.532521  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:12:05.547785  487084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:12:05.566033  487084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:12:05.566094  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.577067  487084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:12:05.577126  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.589050  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.599566  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.609984  487084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:12:05.619430  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.632001  487084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.642199  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:05.652617  487084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:12:05.661297  487084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:12:05.671605  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:05.817088  487084 ssh_runner.go:195] Run: sudo systemctl restart crio
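	The sed edits above rewrite the CRI-O drop-in in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before the daemon-reload and restart. Condensed, the core of what runs on the node looks like this, with the path and values taken from the log:
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio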
	I1207 23:12:06.027922  487084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:12:06.027991  487084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:12:06.032083  487084 start.go:564] Will wait 60s for crictl version
	I1207 23:12:06.032144  487084 ssh_runner.go:195] Run: which crictl
	I1207 23:12:06.035913  487084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:12:06.060174  487084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:12:06.060268  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:06.088918  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:06.119010  487084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:12:06.120321  487084 out.go:179]   - env NO_PROXY=192.168.49.2
	I1207 23:12:06.121801  487084 cli_runner.go:164] Run: docker network inspect ha-907658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:12:06.139719  487084 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:12:06.143993  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:06.155217  487084 mustload.go:66] Loading cluster: ha-907658
	I1207 23:12:06.155433  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:06.155653  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:12:06.173920  487084 host.go:66] Checking if "ha-907658" exists ...
	I1207 23:12:06.174154  487084 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658 for IP: 192.168.49.3
	I1207 23:12:06.174165  487084 certs.go:195] generating shared ca certs ...
	I1207 23:12:06.174179  487084 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:12:06.174311  487084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:12:06.174381  487084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:12:06.174397  487084 certs.go:257] generating profile certs ...
	I1207 23:12:06.174493  487084 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key
	I1207 23:12:06.174583  487084 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key.39a0badd
	I1207 23:12:06.174639  487084 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key
	I1207 23:12:06.174654  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 23:12:06.174671  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 23:12:06.174693  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 23:12:06.174708  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 23:12:06.174722  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 23:12:06.174739  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 23:12:06.174753  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 23:12:06.174772  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 23:12:06.174836  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:12:06.174877  487084 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:12:06.174891  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:12:06.174926  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:12:06.174963  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:12:06.174996  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:12:06.175052  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:06.175095  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.175115  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.175131  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem -> /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.175194  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:12:06.197420  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:12:06.283673  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1207 23:12:06.290449  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1207 23:12:06.302775  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1207 23:12:06.308469  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1207 23:12:06.317835  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1207 23:12:06.321609  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1207 23:12:06.330066  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1207 23:12:06.333816  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1207 23:12:06.345628  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1207 23:12:06.352380  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1207 23:12:06.360869  487084 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1207 23:12:06.364787  487084 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1207 23:12:06.374104  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:12:06.394705  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:12:06.413194  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:12:06.432115  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:12:06.449406  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1207 23:12:06.466917  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:12:06.498654  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:12:06.528737  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:12:06.546449  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:12:06.564005  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:12:06.582815  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:12:06.601666  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1207 23:12:06.615105  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1207 23:12:06.631379  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1207 23:12:06.646798  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1207 23:12:06.659864  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1207 23:12:06.675256  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1207 23:12:06.690795  487084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1207 23:12:06.705444  487084 ssh_runner.go:195] Run: openssl version
	I1207 23:12:06.712063  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.720029  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:12:06.728834  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.733304  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.733391  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:12:06.771128  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:12:06.779038  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.787058  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:12:06.794858  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.798600  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.798662  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:06.834714  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:12:06.842519  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.849816  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:12:06.857109  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.860827  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.860876  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:12:06.901264  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:12:06.909596  487084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:12:06.913535  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:12:06.953706  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:12:06.990023  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:12:07.024365  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:12:07.059478  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:12:07.093656  487084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
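For reference, the run of `openssl x509 -noout -in <cert> -checkend 86400` calls above asks whether each control-plane certificate expires within the next 24 hours. A minimal Go sketch of the same check is below; the path is taken from the log, the program is illustrative only and is not minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the PEM certificate at path expires within the
// next 24 hours, mirroring `openssl x509 -checkend 86400`.
func expiresSoon(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if NotAfter falls before now+24h (including already-expired certs).
	return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}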
	I1207 23:12:07.130433  487084 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1207 23:12:07.130566  487084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-907658-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:12:07.130596  487084 kube-vip.go:115] generating kube-vip config ...
	I1207 23:12:07.130647  487084 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1207 23:12:07.142960  487084 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:12:07.143037  487084 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
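The manifest above is generated only after the `lsmod | grep ip_vs` probe fails, so kube-vip is configured for ARP-based VIP failover without IPVS control-plane load balancing. A minimal Go sketch of that probe, assuming the same condition can be read from /proc/modules (which is what lsmod reports); it is illustrative, not minikube's code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipvsLoaded reports whether any ip_vs* kernel module is loaded by scanning
// /proc/modules. Matching on the line prefix avoids false positives from the
// dependency column of unrelated modules.
func ipvsLoaded() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := ipvsLoaded()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if ok {
		fmt.Println("ip_vs present: IPVS load balancing could be enabled")
	} else {
		fmt.Println("ip_vs absent: fall back to ARP-based VIP only")
	}
}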
	I1207 23:12:07.143109  487084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:12:07.151538  487084 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:12:07.151608  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1207 23:12:07.159652  487084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1207 23:12:07.172062  487084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:12:07.184591  487084 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1207 23:12:07.197988  487084 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1207 23:12:07.201949  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:07.212295  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:07.335873  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:07.349280  487084 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:12:07.349636  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:07.351992  487084 out.go:179] * Verifying Kubernetes components...
	I1207 23:12:07.353164  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:07.482271  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:07.495426  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1207 23:12:07.495497  487084 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1207 23:12:07.495703  487084 node_ready.go:35] waiting up to 6m0s for node "ha-907658-m02" to be "Ready" ...
	I1207 23:12:07.504809  487084 node_ready.go:49] node "ha-907658-m02" is "Ready"
	I1207 23:12:07.504835  487084 node_ready.go:38] duration metric: took 9.118175ms for node "ha-907658-m02" to be "Ready" ...
	I1207 23:12:07.504849  487084 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:12:07.504891  487084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:12:07.517382  487084 api_server.go:72] duration metric: took 168.030727ms to wait for apiserver process to appear ...
	I1207 23:12:07.517409  487084 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:12:07.517436  487084 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1207 23:12:07.523117  487084 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1207 23:12:07.524187  487084 api_server.go:141] control plane version: v1.34.2
	I1207 23:12:07.524214  487084 api_server.go:131] duration metric: took 6.79771ms to wait for apiserver health ...
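The healthz wait above repeatedly requests https://192.168.49.2:8443/healthz until it answers 200 "ok". A short Go sketch of that polling pattern, with InsecureSkipVerify used only to keep the example self-contained (minikube itself trusts its cluster CA and uses client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}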
	I1207 23:12:07.524224  487084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:12:07.530960  487084 system_pods.go:59] 26 kube-system pods found
	I1207 23:12:07.531007  487084 system_pods.go:61] "coredns-66bc5c9577-7lkd8" [87d8dbef-c05d-4fcd-b08e-4ee6bce689ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.531030  487084 system_pods.go:61] "coredns-66bc5c9577-j9lqh" [50fb7869-af19-4fe4-a49d-bf8431faa47e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.531045  487084 system_pods.go:61] "etcd-ha-907658" [a1045f46-63e5-4adf-8cba-698626661685] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.531055  487084 system_pods.go:61] "etcd-ha-907658-m02" [e0fd4196-c559-4ed5-a866-f2edca5d028b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.531065  487084 system_pods.go:61] "etcd-ha-907658-m03" [ec660b37-46e0-4ea6-8143-43a215cb208e] Running
	I1207 23:12:07.531077  487084 system_pods.go:61] "kindnet-5lg58" [595946fb-4b57-4869-85e2-75debf3486ae] Running
	I1207 23:12:07.531082  487084 system_pods.go:61] "kindnet-9rqhs" [78003a20-15f9-43e0-8a11-9c215ade326b] Running
	I1207 23:12:07.531086  487084 system_pods.go:61] "kindnet-hzfvq" [8c0ef1d7-39de-46ce-b16b-4d2794e7dc20] Running
	I1207 23:12:07.531090  487084 system_pods.go:61] "kindnet-wvnmz" [464814b4-64d5-4cae-b298-44186fe9b844] Running
	I1207 23:12:07.531102  487084 system_pods.go:61] "kube-apiserver-ha-907658" [746157f2-b5d4-4a22-b0d0-e186dba5c022] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.531114  487084 system_pods.go:61] "kube-apiserver-ha-907658-m02" [69e1f1f9-cc80-4383-8bf2-cd362ab2fc9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.531122  487084 system_pods.go:61] "kube-apiserver-ha-907658-m03" [6dd58630-2169-4539-b8eb-d9971aef28c0] Running
	I1207 23:12:07.531128  487084 system_pods.go:61] "kube-controller-manager-ha-907658" [86717111-1edd-4e7d-bd64-87a0b751fd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.531132  487084 system_pods.go:61] "kube-controller-manager-ha-907658-m02" [2edf59bb-e62d-4897-9d2f-6a454cc72644] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.531138  487084 system_pods.go:61] "kube-controller-manager-ha-907658-m03" [87b33e73-dedd-477d-87fa-42e198df84ba] Running
	I1207 23:12:07.531141  487084 system_pods.go:61] "kube-proxy-8fwsf" [1d7267ee-074b-40da-bfe0-4b434d732d8c] Running
	I1207 23:12:07.531147  487084 system_pods.go:61] "kube-proxy-b8vz9" [cd4b68a6-4528-4644-bac6-158d1bffd0ed] Running
	I1207 23:12:07.531150  487084 system_pods.go:61] "kube-proxy-r5c77" [c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9] Running
	I1207 23:12:07.531153  487084 system_pods.go:61] "kube-proxy-sdhd8" [55e62bf1-af57-4c34-925a-c44c47ce32ce] Running
	I1207 23:12:07.531157  487084 system_pods.go:61] "kube-scheduler-ha-907658" [16a4e936-d293-4107-b559-200f764f7dd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.531164  487084 system_pods.go:61] "kube-scheduler-ha-907658-m02" [85e3e5a5-fe1f-4994-90d4-c4e42a5a887f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.531175  487084 system_pods.go:61] "kube-scheduler-ha-907658-m03" [ca765146-fd0b-4cc8-9f6e-55e2601a5033] Running
	I1207 23:12:07.531178  487084 system_pods.go:61] "kube-vip-ha-907658" [2fc8fc0b-3f23-44d1-909a-20f06169c8dd] Running
	I1207 23:12:07.531181  487084 system_pods.go:61] "kube-vip-ha-907658-m02" [53a8762d-c686-486f-9814-2f40e4ff3306] Running
	I1207 23:12:07.531184  487084 system_pods.go:61] "kube-vip-ha-907658-m03" [6bc4a730-7a65-43a8-a746-2bc3ffa9ccc8] Running
	I1207 23:12:07.531186  487084 system_pods.go:61] "storage-provisioner" [5e80f8de-afe9-4c94-997c-c06f5ff985db] Running
	I1207 23:12:07.531192  487084 system_pods.go:74] duration metric: took 6.96154ms to wait for pod list to return data ...
	I1207 23:12:07.531202  487084 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:12:07.533477  487084 default_sa.go:45] found service account: "default"
	I1207 23:12:07.533501  487084 default_sa.go:55] duration metric: took 2.292892ms for default service account to be created ...
	I1207 23:12:07.533508  487084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:12:07.539025  487084 system_pods.go:86] 26 kube-system pods found
	I1207 23:12:07.539051  487084 system_pods.go:89] "coredns-66bc5c9577-7lkd8" [87d8dbef-c05d-4fcd-b08e-4ee6bce689ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.539059  487084 system_pods.go:89] "coredns-66bc5c9577-j9lqh" [50fb7869-af19-4fe4-a49d-bf8431faa47e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:12:07.539067  487084 system_pods.go:89] "etcd-ha-907658" [a1045f46-63e5-4adf-8cba-698626661685] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.539072  487084 system_pods.go:89] "etcd-ha-907658-m02" [e0fd4196-c559-4ed5-a866-f2edca5d028b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:12:07.539076  487084 system_pods.go:89] "etcd-ha-907658-m03" [ec660b37-46e0-4ea6-8143-43a215cb208e] Running
	I1207 23:12:07.539080  487084 system_pods.go:89] "kindnet-5lg58" [595946fb-4b57-4869-85e2-75debf3486ae] Running
	I1207 23:12:07.539083  487084 system_pods.go:89] "kindnet-9rqhs" [78003a20-15f9-43e0-8a11-9c215ade326b] Running
	I1207 23:12:07.539087  487084 system_pods.go:89] "kindnet-hzfvq" [8c0ef1d7-39de-46ce-b16b-4d2794e7dc20] Running
	I1207 23:12:07.539090  487084 system_pods.go:89] "kindnet-wvnmz" [464814b4-64d5-4cae-b298-44186fe9b844] Running
	I1207 23:12:07.539097  487084 system_pods.go:89] "kube-apiserver-ha-907658" [746157f2-b5d4-4a22-b0d0-e186dba5c022] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.539105  487084 system_pods.go:89] "kube-apiserver-ha-907658-m02" [69e1f1f9-cc80-4383-8bf2-cd362ab2fc9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:12:07.539109  487084 system_pods.go:89] "kube-apiserver-ha-907658-m03" [6dd58630-2169-4539-b8eb-d9971aef28c0] Running
	I1207 23:12:07.539118  487084 system_pods.go:89] "kube-controller-manager-ha-907658" [86717111-1edd-4e7d-bd64-87a0b751fd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.539123  487084 system_pods.go:89] "kube-controller-manager-ha-907658-m02" [2edf59bb-e62d-4897-9d2f-6a454cc72644] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:12:07.539127  487084 system_pods.go:89] "kube-controller-manager-ha-907658-m03" [87b33e73-dedd-477d-87fa-42e198df84ba] Running
	I1207 23:12:07.539130  487084 system_pods.go:89] "kube-proxy-8fwsf" [1d7267ee-074b-40da-bfe0-4b434d732d8c] Running
	I1207 23:12:07.539139  487084 system_pods.go:89] "kube-proxy-b8vz9" [cd4b68a6-4528-4644-bac6-158d1bffd0ed] Running
	I1207 23:12:07.539144  487084 system_pods.go:89] "kube-proxy-r5c77" [c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9] Running
	I1207 23:12:07.539153  487084 system_pods.go:89] "kube-proxy-sdhd8" [55e62bf1-af57-4c34-925a-c44c47ce32ce] Running
	I1207 23:12:07.539159  487084 system_pods.go:89] "kube-scheduler-ha-907658" [16a4e936-d293-4107-b559-200f764f7dd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.539164  487084 system_pods.go:89] "kube-scheduler-ha-907658-m02" [85e3e5a5-fe1f-4994-90d4-c4e42a5a887f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:12:07.539167  487084 system_pods.go:89] "kube-scheduler-ha-907658-m03" [ca765146-fd0b-4cc8-9f6e-55e2601a5033] Running
	I1207 23:12:07.539171  487084 system_pods.go:89] "kube-vip-ha-907658" [2fc8fc0b-3f23-44d1-909a-20f06169c8dd] Running
	I1207 23:12:07.539174  487084 system_pods.go:89] "kube-vip-ha-907658-m02" [53a8762d-c686-486f-9814-2f40e4ff3306] Running
	I1207 23:12:07.539176  487084 system_pods.go:89] "kube-vip-ha-907658-m03" [6bc4a730-7a65-43a8-a746-2bc3ffa9ccc8] Running
	I1207 23:12:07.539181  487084 system_pods.go:89] "storage-provisioner" [5e80f8de-afe9-4c94-997c-c06f5ff985db] Running
	I1207 23:12:07.539191  487084 system_pods.go:126] duration metric: took 5.677775ms to wait for k8s-apps to be running ...
	I1207 23:12:07.539200  487084 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:12:07.539244  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:12:07.552415  487084 system_svc.go:56] duration metric: took 13.204195ms WaitForService to wait for kubelet
	I1207 23:12:07.552445  487084 kubeadm.go:587] duration metric: took 203.099861ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:12:07.552461  487084 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:12:07.556717  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:07.556763  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:07.556789  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:07.556794  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:07.556800  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:07.556804  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:07.556815  487084 node_conditions.go:105] duration metric: took 4.343663ms to run NodePressure ...
	I1207 23:12:07.556830  487084 start.go:242] waiting for startup goroutines ...
	I1207 23:12:07.556864  487084 start.go:256] writing updated cluster config ...
	I1207 23:12:07.559024  487084 out.go:203] 
	I1207 23:12:07.560420  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:07.560527  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:07.562073  487084 out.go:179] * Starting "ha-907658-m04" worker node in "ha-907658" cluster
	I1207 23:12:07.563315  487084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:12:07.564547  487084 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:12:07.565586  487084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:12:07.565600  487084 cache.go:65] Caching tarball of preloaded images
	I1207 23:12:07.565653  487084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:12:07.565684  487084 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:12:07.565695  487084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:12:07.565787  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:07.585455  487084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:12:07.585473  487084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:12:07.585488  487084 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:12:07.585525  487084 start.go:360] acquireMachinesLock for ha-907658-m04: {Name:mkbf928fa5c7c7d65c3e97ec1b1d2c403a4aafbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:12:07.585593  487084 start.go:364] duration metric: took 46.24µs to acquireMachinesLock for "ha-907658-m04"
	I1207 23:12:07.585618  487084 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:12:07.585630  487084 fix.go:54] fixHost starting: m04
	I1207 23:12:07.585905  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m04 --format={{.State.Status}}
	I1207 23:12:07.603987  487084 fix.go:112] recreateIfNeeded on ha-907658-m04: state=Stopped err=<nil>
	W1207 23:12:07.604014  487084 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:12:07.605765  487084 out.go:252] * Restarting existing docker container for "ha-907658-m04" ...
	I1207 23:12:07.605839  487084 cli_runner.go:164] Run: docker start ha-907658-m04
	I1207 23:12:07.853178  487084 cli_runner.go:164] Run: docker container inspect ha-907658-m04 --format={{.State.Status}}
	I1207 23:12:07.874755  487084 kic.go:430] container "ha-907658-m04" state is running.
	I1207 23:12:07.875212  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:12:07.896653  487084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/config.json ...
	I1207 23:12:07.897024  487084 machine.go:94] provisionDockerMachine start ...
	I1207 23:12:07.897151  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:07.918923  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:07.919195  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:07.919216  487084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:12:07.919824  487084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49894->127.0.0.1:33223: read: connection reset by peer
	I1207 23:12:11.048469  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m04
	
	I1207 23:12:11.048499  487084 ubuntu.go:182] provisioning hostname "ha-907658-m04"
	I1207 23:12:11.048563  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.066447  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:11.066738  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:11.066753  487084 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-907658-m04 && echo "ha-907658-m04" | sudo tee /etc/hostname
	I1207 23:12:11.206276  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-907658-m04
	
	I1207 23:12:11.206388  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.225667  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:11.225909  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:11.225925  487084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-907658-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-907658-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-907658-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:12:11.355703  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:12:11.355747  487084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:12:11.355789  487084 ubuntu.go:190] setting up certificates
	I1207 23:12:11.355803  487084 provision.go:84] configureAuth start
	I1207 23:12:11.355885  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:12:11.374837  487084 provision.go:143] copyHostCerts
	I1207 23:12:11.374879  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:11.374918  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:12:11.374932  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:12:11.375021  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:12:11.375125  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:11.375155  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:12:11.375165  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:12:11.375205  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:12:11.375256  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:11.375278  487084 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:12:11.375284  487084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:12:11.375321  487084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:12:11.375435  487084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.ha-907658-m04 san=[127.0.0.1 192.168.49.5 ha-907658-m04 localhost minikube]
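The line above records generating a machine server certificate whose SAN list covers the node's loopback address, its cluster IP, its hostname, and the generic names localhost and minikube. A hedged Go sketch of issuing such a certificate follows; it is self-signed purely for brevity, whereas minikube signs machine certificates with its own CA, and all names and IPs are copied from the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-907658-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-907658-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	}
	// Self-signed for illustration; a real machine cert would be signed by the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}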
	I1207 23:12:11.430934  487084 provision.go:177] copyRemoteCerts
	I1207 23:12:11.431006  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:12:11.431063  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.449187  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:11.543515  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 23:12:11.543582  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 23:12:11.562188  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 23:12:11.562249  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 23:12:11.579970  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 23:12:11.580024  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:12:11.597607  487084 provision.go:87] duration metric: took 241.785948ms to configureAuth
	I1207 23:12:11.597642  487084 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:12:11.597863  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:11.597964  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.616041  487084 main.go:143] libmachine: Using SSH client type: native
	I1207 23:12:11.616267  487084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1207 23:12:11.616282  487084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:12:11.900554  487084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:12:11.900587  487084 machine.go:97] duration metric: took 4.00354246s to provisionDockerMachine
	I1207 23:12:11.900600  487084 start.go:293] postStartSetup for "ha-907658-m04" (driver="docker")
	I1207 23:12:11.900611  487084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:12:11.900667  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:12:11.900705  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:11.919920  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.015993  487084 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:12:12.019664  487084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:12:12.019701  487084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:12:12.019713  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:12:12.019773  487084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:12:12.019880  487084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:12:12.019892  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /etc/ssl/certs/3931252.pem
	I1207 23:12:12.020003  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:12:12.028252  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:12.045963  487084 start.go:296] duration metric: took 145.345162ms for postStartSetup
	I1207 23:12:12.046054  487084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:12:12.046100  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:12.064419  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.155615  487084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:12:12.160279  487084 fix.go:56] duration metric: took 4.57464273s for fixHost
	I1207 23:12:12.160305  487084 start.go:83] releasing machines lock for "ha-907658-m04", held for 4.574698172s
	I1207 23:12:12.160388  487084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:12:12.180857  487084 out.go:179] * Found network options:
	I1207 23:12:12.182145  487084 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1207 23:12:12.183173  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:12.183195  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:12.183220  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	W1207 23:12:12.183237  487084 proxy.go:120] fail to check proxy env: Error ip not in block
	I1207 23:12:12.183304  487084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:12:12.183368  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:12.183387  487084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:12:12.183450  487084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:12:12.203407  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.203844  487084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:12:12.357625  487084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:12:12.362541  487084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:12:12.362619  487084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:12:12.370757  487084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:12:12.370785  487084 start.go:496] detecting cgroup driver to use...
	I1207 23:12:12.370818  487084 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:12:12.370864  487084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:12:12.385478  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:12:12.398446  487084 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:12:12.398518  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:12:12.413312  487084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:12:12.425964  487084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:12:12.508240  487084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:12:12.594377  487084 docker.go:234] disabling docker service ...
	I1207 23:12:12.594469  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:12:12.609287  487084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:12:12.621518  487084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:12:12.706445  487084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:12:12.788828  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:12:12.801567  487084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:12:12.815799  487084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:12:12.815866  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.824631  487084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:12:12.824701  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.834415  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.843435  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.852233  487084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:12:12.861003  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.870357  487084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.879159  487084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:12:12.888283  487084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:12:12.896022  487084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:12:12.903097  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:12.988157  487084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:12:13.133593  487084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:12:13.133671  487084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:12:13.137843  487084 start.go:564] Will wait 60s for crictl version
	I1207 23:12:13.137917  487084 ssh_runner.go:195] Run: which crictl
	I1207 23:12:13.141433  487084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:12:13.167512  487084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:12:13.167597  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:13.199036  487084 ssh_runner.go:195] Run: crio --version
	I1207 23:12:13.229455  487084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:12:13.230791  487084 out.go:179]   - env NO_PROXY=192.168.49.2
	I1207 23:12:13.232057  487084 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1207 23:12:13.233540  487084 cli_runner.go:164] Run: docker network inspect ha-907658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:12:13.250726  487084 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 23:12:13.254740  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:13.265197  487084 mustload.go:66] Loading cluster: ha-907658
	I1207 23:12:13.265455  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:13.265697  487084 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:12:13.284748  487084 host.go:66] Checking if "ha-907658" exists ...
	I1207 23:12:13.285028  487084 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658 for IP: 192.168.49.5
	I1207 23:12:13.285041  487084 certs.go:195] generating shared ca certs ...
	I1207 23:12:13.285056  487084 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:12:13.285200  487084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:12:13.285261  487084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:12:13.285280  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 23:12:13.285300  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 23:12:13.285317  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 23:12:13.285349  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 23:12:13.285417  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:12:13.285460  487084 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:12:13.285474  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:12:13.285512  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:12:13.285554  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:12:13.285592  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:12:13.285658  487084 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:12:13.285698  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.285722  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem -> /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.285741  487084 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.285769  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:12:13.304120  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:12:13.322222  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:12:13.340050  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:12:13.357784  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:12:13.376383  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:12:13.395635  487084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:12:13.413473  487084 ssh_runner.go:195] Run: openssl version
	I1207 23:12:13.419754  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.427021  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:12:13.434993  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.439202  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.439267  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:12:13.473339  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:12:13.481399  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.488584  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:12:13.495734  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.499349  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.499394  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:12:13.534119  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:12:13.542358  487084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.550110  487084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:12:13.557923  487084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.561771  487084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.561821  487084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:12:13.600731  487084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:12:13.608915  487084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:12:13.612836  487084 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:12:13.612892  487084 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2  false true} ...
	I1207 23:12:13.613000  487084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-907658-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-907658 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:12:13.613071  487084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:12:13.620905  487084 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:12:13.620964  487084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1207 23:12:13.628840  487084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1207 23:12:13.642519  487084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:12:13.655821  487084 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1207 23:12:13.660403  487084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:12:13.672258  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:13.756400  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:13.769720  487084 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1207 23:12:13.770008  487084 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:12:13.772651  487084 out.go:179] * Verifying Kubernetes components...
	I1207 23:12:13.773857  487084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:12:13.857293  487084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:12:13.870886  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1207 23:12:13.870958  487084 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1207 23:12:13.871160  487084 node_ready.go:35] waiting up to 6m0s for node "ha-907658-m04" to be "Ready" ...
	I1207 23:12:13.874196  487084 node_ready.go:49] node "ha-907658-m04" is "Ready"
	I1207 23:12:13.874220  487084 node_ready.go:38] duration metric: took 3.046821ms for node "ha-907658-m04" to be "Ready" ...
	I1207 23:12:13.874233  487084 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:12:13.874273  487084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:12:13.886840  487084 system_svc.go:56] duration metric: took 12.598168ms WaitForService to wait for kubelet
	I1207 23:12:13.886868  487084 kubeadm.go:587] duration metric: took 117.090427ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:12:13.886885  487084 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:12:13.890337  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:13.890362  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:13.890375  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:13.890380  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:13.890386  487084 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:12:13.890392  487084 node_conditions.go:123] node cpu capacity is 8
	I1207 23:12:13.890400  487084 node_conditions.go:105] duration metric: took 3.509832ms to run NodePressure ...
	I1207 23:12:13.890416  487084 start.go:242] waiting for startup goroutines ...
	I1207 23:12:13.890446  487084 start.go:256] writing updated cluster config ...
	I1207 23:12:13.890792  487084 ssh_runner.go:195] Run: rm -f paused
	I1207 23:12:13.894562  487084 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:12:13.895171  487084 kapi.go:59] client config for ha-907658: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/ha-907658/client.key", CAFile:"/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 23:12:13.903646  487084 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7lkd8" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:12:15.910233  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:17.910533  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:20.410624  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:22.909833  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:25.410696  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:27.909729  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:29.911016  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:32.410597  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:34.410833  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:36.909456  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:38.911942  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:41.410807  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:43.910363  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:46.411526  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:48.911050  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:51.412217  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:53.910759  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:56.410211  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:12:58.410607  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:00.411373  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:02.910918  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:05.409687  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:07.409957  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:09.910681  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:12.410492  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:14.410764  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:16.909949  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:18.910470  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:20.911090  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:23.410279  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:25.910548  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:27.910666  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:30.410084  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:32.410161  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:34.411051  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:36.910027  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:39.410570  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:41.909517  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:43.910651  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:46.409768  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:48.410760  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:50.910511  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:52.910970  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:55.410193  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:57.410684  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:13:59.911085  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:01.911298  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:04.410828  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:06.910004  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:08.910803  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:11.410260  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:13.410549  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:15.911180  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:18.410236  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:20.910248  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:23.410312  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:25.909481  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:27.910308  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:29.910475  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:32.410112  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:34.910739  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:37.410174  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:39.410772  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:41.910812  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:44.409997  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:46.410369  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:48.910126  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:50.910698  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:53.410089  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:55.410604  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:57.910049  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:14:59.910503  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:02.409755  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:04.909540  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:06.910504  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:09.409997  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:11.411142  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:13.910274  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:16.410995  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:18.909895  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:20.909974  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:22.910657  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:25.410074  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:27.410196  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:29.410456  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:31.910828  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:34.410231  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:36.410432  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:38.909644  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:40.910092  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:42.910856  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:45.409802  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:47.410082  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:49.410149  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:51.910490  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:54.409927  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:56.410532  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:15:58.909671  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:00.910288  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:02.910545  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:05.410175  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:07.909887  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:09.910041  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	W1207 23:16:11.910457  487084 pod_ready.go:104] pod "coredns-66bc5c9577-7lkd8" is not "Ready", error: <nil>
	I1207 23:16:13.895206  487084 pod_ready.go:86] duration metric: took 3m59.991503796s for pod "coredns-66bc5c9577-7lkd8" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:16:13.895245  487084 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1207 23:16:13.895263  487084 pod_ready.go:40] duration metric: took 4m0.000670566s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:16:13.897256  487084 out.go:203] 
	W1207 23:16:13.898559  487084 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1207 23:16:13.899846  487084 out.go:203] 
	
	
	==> CRI-O <==
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.419320979Z" level=info msg="Started container" PID=1066 containerID=59632406be56295008167128b06b3d246e8cb935a790ce61ab27d7c9a0210c7a description=default/busybox-7b57f96db7-wts8f/busybox id=7b19c8e0-1b80-4d6a-a660-59d86bda3787 name=/runtime.v1.RuntimeService/StartContainer sandboxID=974bf02e23133aac017f3d339f396c28ca8b3d88a654f87bb690e5359126f72a
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.42219102Z" level=info msg="Created container b66756d6bf8454e51e71c9a010e9f000c2d6f65f4202832cc7a3a3bf546e9566: kube-system/kube-proxy-r5c77/kube-proxy" id=6c2d44d8-af9b-488e-a8fa-96cfda6ad07e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.422764701Z" level=info msg="Starting container: b66756d6bf8454e51e71c9a010e9f000c2d6f65f4202832cc7a3a3bf546e9566" id=f4e610f6-9234-460c-ab15-e7f9e1e22236 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.423187163Z" level=info msg="Created container c6e4a88e898128e18b3156f394f70fd2b7676c0a3014577d38064cdc4c08e233: default/busybox-7b57f96db7-dslrx/busybox" id=947f78d0-ea74-4827-abe4-b36a0b7703f5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.423803868Z" level=info msg="Starting container: c6e4a88e898128e18b3156f394f70fd2b7676c0a3014577d38064cdc4c08e233" id=f8c5be5c-7fca-4d32-8a6c-68008559df07 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.425692066Z" level=info msg="Started container" PID=1071 containerID=c6e4a88e898128e18b3156f394f70fd2b7676c0a3014577d38064cdc4c08e233 description=default/busybox-7b57f96db7-dslrx/busybox id=f8c5be5c-7fca-4d32-8a6c-68008559df07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fee9745be2801cab826368bca687acad119bd0bddcf3bddfe083e1bc37ec0a2e
	Dec 07 23:12:03 ha-907658 crio[574]: time="2025-12-07T23:12:03.425952275Z" level=info msg="Started container" PID=1065 containerID=b66756d6bf8454e51e71c9a010e9f000c2d6f65f4202832cc7a3a3bf546e9566 description=kube-system/kube-proxy-r5c77/kube-proxy id=f4e610f6-9234-460c-ab15-e7f9e1e22236 name=/runtime.v1.RuntimeService/StartContainer sandboxID=81d062f869179dcf8073b42df610726a49898283cc3b7b1c4382936f244009bc
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.828232315Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.832561313Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.832595738Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.832614781Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.836515238Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.836547213Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.836564322Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.840132316Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.840156246Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.840172174Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.844126033Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.844147287Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.8441679Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.847881335Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.84790256Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.847918681Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.851426018Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:12:13 ha-907658 crio[574]: time="2025-12-07T23:12:13.851446887Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	c6e4a88e89812       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   5 minutes ago       Running             busybox                   2                   fee9745be2801       busybox-7b57f96db7-dslrx            default
	59632406be562       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   5 minutes ago       Running             busybox                   2                   974bf02e23133       busybox-7b57f96db7-wts8f            default
	b66756d6bf845       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   5 minutes ago       Running             kube-proxy                0                   81d062f869179       kube-proxy-r5c77                    kube-system
	6e24622fde46e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 minutes ago       Running             kindnet-cni               0                   91e6c1a0bfdf0       kindnet-hzfvq                       kube-system
	86601d9f6ba07       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   5 minutes ago       Running             kube-controller-manager   0                   b67664be25ec4       kube-controller-manager-ha-907658   kube-system
	3102169518f14       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   5 minutes ago       Running             etcd                      0                   54905301bb684       etcd-ha-907658                      kube-system
	87abab3f9975c       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   5 minutes ago       Running             kube-apiserver            0                   56a831ff3eb23       kube-apiserver-ha-907658            kube-system
	db1d97b687400       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   5 minutes ago       Running             kube-scheduler            0                   cae40eeeedff8       kube-scheduler-ha-907658            kube-system
	04ab6dc0a72c2       6a2e30457bbed0ffdc161ff0131dfcfe9099692717f3d1bcae88b9db3d5a033c   5 minutes ago       Running             kube-vip                  0                   a3d8fbda9f509       kube-vip-ha-907658                  kube-system
	
	
	==> describe nodes <==
	Name:               ha-907658
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-907658
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=ha-907658
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_06_42_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:06:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-907658
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:17:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:17:08 +0000   Sun, 07 Dec 2025 23:06:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:17:08 +0000   Sun, 07 Dec 2025 23:06:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:17:08 +0000   Sun, 07 Dec 2025 23:06:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:17:08 +0000   Sun, 07 Dec 2025 23:07:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-907658
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                f44bac47-757c-4c31-8a75-ef9ebb40422e
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-dslrx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  default                     busybox-7b57f96db7-wts8f             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 etcd-ha-907658                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-hzfvq                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-907658             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-907658    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-r5c77                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-907658             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-907658                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  Starting                 6m41s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node ha-907658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node ha-907658 status is now: NodeHasSufficientMemory
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node ha-907658 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                    node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  NodeReady                9m51s                  kubelet          Node ha-907658 status is now: NodeReady
	  Normal  RegisteredNode           9m42s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           7m53s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  Starting                 6m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    6m56s (x8 over 6m57s)  kubelet          Node ha-907658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m56s (x8 over 6m57s)  kubelet          Node ha-907658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     6m56s (x8 over 6m57s)  kubelet          Node ha-907658 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m41s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           6m41s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           6m37s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  Starting                 5m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node ha-907658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node ha-907658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x8 over 5m20s)  kubelet          Node ha-907658 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	  Normal  RegisteredNode           34s                    node-controller  Node ha-907658 event: Registered Node ha-907658 in Controller
	
	
	Name:               ha-907658-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-907658-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=ha-907658
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_07T23_07_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:07:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-907658-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:17:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:16:28 +0000   Sun, 07 Dec 2025 23:07:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:16:28 +0000   Sun, 07 Dec 2025 23:07:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:16:28 +0000   Sun, 07 Dec 2025 23:07:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:16:28 +0000   Sun, 07 Dec 2025 23:12:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-907658-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                c4423b9c-a5a3-462a-aa6c-dc14a3add1e7
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-sd5gw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 coredns-66bc5c9577-7lkd8                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 coredns-66bc5c9577-j9lqh                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-ha-907658-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-wvnmz                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-907658-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-907658-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-sdhd8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-907658-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-907658-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 6m42s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-907658-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-907658-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-907658-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           9m42s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  Starting                 7m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m59s (x8 over 7m59s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m59s (x8 over 7m59s)  kubelet          Node ha-907658-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m59s (x8 over 7m59s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m53s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  NodeHasNoDiskPressure    6m55s (x8 over 6m55s)  kubelet          Node ha-907658-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m55s (x8 over 6m55s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     6m55s (x8 over 6m55s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m55s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m41s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           6m41s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           6m37s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  Starting                 5m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node ha-907658-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x8 over 5m19s)  kubelet          Node ha-907658-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	  Normal  RegisteredNode           34s                    node-controller  Node ha-907658-m02 event: Registered Node ha-907658-m02 in Controller
	
	
	Name:               ha-907658-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-907658-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=ha-907658
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_07T23_08_29_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:08:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-907658-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:17:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:17:06 +0000   Sun, 07 Dec 2025 23:08:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:17:06 +0000   Sun, 07 Dec 2025 23:08:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:17:06 +0000   Sun, 07 Dec 2025 23:08:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:17:06 +0000   Sun, 07 Dec 2025 23:08:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-907658-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                f80b86e6-d691-401f-8493-d6f45994affe
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9rqhs       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m50s
	  kube-system                 kube-proxy-b8vz9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m47s                  kube-proxy       
	  Normal  Starting                 4m43s                  kube-proxy       
	  Normal  Starting                 6m10s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m50s (x3 over 8m50s)  kubelet          Node ha-907658-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m50s (x3 over 8m50s)  kubelet          Node ha-907658-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m50s (x3 over 8m50s)  kubelet          Node ha-907658-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m49s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           8m47s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           8m47s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  NodeReady                8m37s                  kubelet          Node ha-907658-m04 status is now: NodeReady
	  Normal  RegisteredNode           7m53s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           6m41s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           6m41s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           6m37s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  Starting                 6m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m26s (x8 over 6m29s)  kubelet          Node ha-907658-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x8 over 6m29s)  kubelet          Node ha-907658-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s (x8 over 6m29s)  kubelet          Node ha-907658-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	  Normal  Starting                 5m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m8s (x8 over 5m11s)   kubelet          Node ha-907658-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x8 over 5m11s)   kubelet          Node ha-907658-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x8 over 5m11s)   kubelet          Node ha-907658-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           34s                    node-controller  Node ha-907658-m04 event: Registered Node ha-907658-m04 in Controller
	
	
	Name:               ha-907658-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-907658-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=ha-907658
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_07T23_16_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:16:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-907658-m05
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:17:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:17:15 +0000   Sun, 07 Dec 2025 23:16:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:17:15 +0000   Sun, 07 Dec 2025 23:16:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:17:15 +0000   Sun, 07 Dec 2025 23:16:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:17:15 +0000   Sun, 07 Dec 2025 23:17:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-907658-m05
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                2f59d6d5-0b04-42ae-a87a-c3aaa091a87e
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-907658-m05                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-9bldj                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-ha-907658-m05             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-ha-907658-m05    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-f5cfv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-ha-907658-m05             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-vip-ha-907658-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        15s   kube-proxy       
	  Normal  RegisteredNode  29s   node-controller  Node ha-907658-m05 event: Registered Node ha-907658-m05 in Controller
	  Normal  RegisteredNode  28s   node-controller  Node ha-907658-m05 event: Registered Node ha-907658-m05 in Controller
	  Normal  RegisteredNode  28s   node-controller  Node ha-907658-m05 event: Registered Node ha-907658-m05 in Controller
	
	
	==> dmesg <==
	[  +0.006693] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[Dec 7 23:17] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007029] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494411] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006545] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493844] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [3102169518f14fb026edc01e1247ff4c2edc1292fb8d6ddab3310dc29262b65d] <==
	{"level":"info","ts":"2025-12-07T23:16:34.521935Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5984d74cb5b85d3e","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-07T23:16:34.521977Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5984d74cb5b85d3e"}
	{"level":"info","ts":"2025-12-07T23:16:34.521880Z","caller":"etcdserver/server.go:1854","msg":"sending merged snapshot","from":"aec36adc501070cc","to":"5984d74cb5b85d3e","bytes":6296933,"size":"6.3 MB"}
	{"level":"info","ts":"2025-12-07T23:16:34.522105Z","caller":"rafthttp/snapshot_sender.go:82","msg":"sending database snapshot","snapshot-index":3375,"remote-peer-id":"5984d74cb5b85d3e","bytes":6296933,"size":"6.3 MB"}
	{"level":"info","ts":"2025-12-07T23:16:34.522202Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5984d74cb5b85d3e","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-07T23:16:34.522229Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5984d74cb5b85d3e"}
	{"level":"info","ts":"2025-12-07T23:16:34.544035Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":6287360,"size":"6.3 MB"}
	{"level":"info","ts":"2025-12-07T23:16:34.553513Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":3375,"remote-peer-id":"5984d74cb5b85d3e","bytes":6296933,"size":"6.3 MB"}
	{"level":"warn","ts":"2025-12-07T23:16:34.571406Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5984d74cb5b85d3e","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:16:34.572203Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5984d74cb5b85d3e","error":"EOF"}
	{"level":"info","ts":"2025-12-07T23:16:34.584565Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5984d74cb5b85d3e","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-12-07T23:16:34.584602Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5984d74cb5b85d3e"}
	{"level":"info","ts":"2025-12-07T23:16:34.584614Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5984d74cb5b85d3e"}
	{"level":"info","ts":"2025-12-07T23:16:34.594545Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"5984d74cb5b85d3e","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-12-07T23:16:34.594612Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5984d74cb5b85d3e"}
	{"level":"info","ts":"2025-12-07T23:16:34.594626Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5984d74cb5b85d3e"}
	{"level":"info","ts":"2025-12-07T23:16:34.603259Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5984d74cb5b85d3e"}
	{"level":"info","ts":"2025-12-07T23:16:34.603312Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5984d74cb5b85d3e"}
	{"level":"info","ts":"2025-12-07T23:16:34.935894Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6450517290767637822 9601273578807870498 12593026477526642892)"}
	{"level":"info","ts":"2025-12-07T23:16:34.936055Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"5984d74cb5b85d3e"}
	{"level":"info","ts":"2025-12-07T23:16:34.936110Z","caller":"etcdserver/server.go:1768","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"5984d74cb5b85d3e"}
	{"level":"info","ts":"2025-12-07T23:16:46.637270Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-07T23:16:48.958165Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-07T23:17:03.718848Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-07T23:17:04.553818Z","caller":"etcdserver/server.go:1872","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"5984d74cb5b85d3e","bytes":6296933,"size":"6.3 MB","took":"30.031932864s"}
	
	
	==> kernel <==
	 23:17:19 up  1:59,  0 user,  load average: 1.13, 1.20, 1.51
	Linux ha-907658 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6e24622fde46e804a62af01a0bc9c1984d71da811c0cb4227298bc171e53fbb1] <==
	I1207 23:16:53.827634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:16:53.827684       1 main.go:301] handling current node
	I1207 23:16:53.827702       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1207 23:16:53.827707       1 main.go:324] Node ha-907658-m02 has CIDR [10.244.1.0/24] 
	I1207 23:16:53.827891       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1207 23:16:53.827900       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
	I1207 23:16:53.827974       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1207 23:16:53.827981       1 main.go:324] Node ha-907658-m05 has CIDR [10.244.2.0/24] 
	I1207 23:16:53.828062       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.49.6 Flags: [] Table: 0 Realm: 0} 
	I1207 23:17:03.828086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:17:03.828123       1 main.go:301] handling current node
	I1207 23:17:03.828139       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1207 23:17:03.828146       1 main.go:324] Node ha-907658-m02 has CIDR [10.244.1.0/24] 
	I1207 23:17:03.828375       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1207 23:17:03.828392       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
	I1207 23:17:03.828496       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1207 23:17:03.828507       1 main.go:324] Node ha-907658-m05 has CIDR [10.244.2.0/24] 
	I1207 23:17:13.829663       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1207 23:17:13.829710       1 main.go:301] handling current node
	I1207 23:17:13.829726       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1207 23:17:13.829730       1 main.go:324] Node ha-907658-m02 has CIDR [10.244.1.0/24] 
	I1207 23:17:13.829933       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1207 23:17:13.829947       1 main.go:324] Node ha-907658-m04 has CIDR [10.244.3.0/24] 
	I1207 23:17:13.830063       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1207 23:17:13.830075       1 main.go:324] Node ha-907658-m05 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [87abab3f9975c7d1ffa51c90a94a832599db31aa8d9e2e4cdcccfa593c87020f] <==
	I1207 23:12:03.040289       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1207 23:12:03.040464       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 23:12:03.040505       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 23:12:03.040770       1 aggregator.go:171] initial CRD sync complete...
	I1207 23:12:03.040809       1 autoregister_controller.go:144] Starting autoregister controller
	I1207 23:12:03.040832       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:12:03.040883       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:12:03.041299       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1207 23:12:03.041943       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 23:12:03.042481       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1207 23:12:03.042740       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1207 23:12:03.049189       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:12:03.051184       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1207 23:12:03.058680       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1207 23:12:03.058715       1 policy_source.go:240] refreshing policies
	E1207 23:12:03.062917       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:12:03.092652       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:12:03.204088       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:12:03.945462       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1207 23:12:04.372374       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 23:12:04.373818       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:12:04.380398       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:12:06.632914       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:12:06.742193       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:12:06.884554       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [86601d9f6ba07c5cc957fcd84ee14c9ed14e0f86e2c332659c8fd9ca9c473cdd] <==
	E1207 23:12:46.377609       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377617       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377626       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	E1207 23:12:46.377632       1 gc_controller.go:151] "Failed to get node" err="node \"ha-907658-m03\" not found" logger="pod-garbage-collector-controller" node="ha-907658-m03"
	I1207 23:12:46.388648       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5lg58"
	I1207 23:12:46.410719       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5lg58"
	I1207 23:12:46.411071       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-907658-m03"
	I1207 23:12:46.433046       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-907658-m03"
	I1207 23:12:46.433163       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-907658-m03"
	I1207 23:12:46.454493       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-907658-m03"
	I1207 23:12:46.454614       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8fwsf"
	I1207 23:12:46.480073       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8fwsf"
	I1207 23:12:46.480362       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-907658-m03"
	I1207 23:12:46.506233       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-907658-m03"
	I1207 23:12:46.506270       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-907658-m03"
	I1207 23:12:46.539150       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-907658-m03"
	I1207 23:12:46.539211       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-907658-m03"
	I1207 23:12:46.557024       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-907658-m03"
	E1207 23:16:46.628574       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-tn6mm failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-tn6mm\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1207 23:16:46.639217       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-tn6mm failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-tn6mm\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1207 23:16:48.059064       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-907658-m04"
	I1207 23:16:48.060058       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-907658-m05\" does not exist"
	I1207 23:16:48.069388       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-907658-m05" podCIDRs=["10.244.2.0/24"]
	I1207 23:16:51.425395       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-907658-m05"
	I1207 23:17:15.655478       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-907658-m04"
	
	
	==> kube-proxy [b66756d6bf8454e51e71c9a010e9f000c2d6f65f4202832cc7a3a3bf546e9566] <==
	I1207 23:12:03.463144       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:12:03.526682       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 23:12:03.627174       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 23:12:03.627210       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1207 23:12:03.627301       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:12:03.644894       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:12:03.644940       1 server_linux.go:132] "Using iptables Proxier"
	I1207 23:12:03.650181       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:12:03.650669       1 server.go:527] "Version info" version="v1.34.2"
	I1207 23:12:03.650718       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:12:03.653161       1 config.go:200] "Starting service config controller"
	I1207 23:12:03.653188       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:12:03.653219       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:12:03.653225       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:12:03.653244       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:12:03.653256       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:12:03.653346       1 config.go:309] "Starting node config controller"
	I1207 23:12:03.653353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:12:03.653366       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:12:03.753518       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:12:03.753552       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:12:03.753868       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [db1d97b6874004dcfa1bfc301e8470ac6e8ab810f5002178c4d64e0899af2340] <==
	I1207 23:12:03.035857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 23:12:03.035870       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:12:03.035870       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1207 23:12:03.035879       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 23:12:03.036226       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:12:03.036552       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:12:03.136624       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1207 23:12:03.136650       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:12:03.136707       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	E1207 23:16:48.102306       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-q7q27\": pod kube-proxy-q7q27 is already assigned to node \"ha-907658-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-q7q27" node="ha-907658-m05"
	E1207 23:16:48.102417       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 52c6e488-4924-4b01-af85-5017f7346151(kube-system/kube-proxy-q7q27) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-q7q27"
	E1207 23:16:48.102459       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-q7q27\": pod kube-proxy-q7q27 is already assigned to node \"ha-907658-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-q7q27"
	E1207 23:16:48.102542       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9bldj\": pod kindnet-9bldj is already assigned to node \"ha-907658-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-9bldj" node="ha-907658-m05"
	E1207 23:16:48.102584       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 5889a075-b820-4ab0-91ae-ac6f7738ee64(kube-system/kindnet-9bldj) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-9bldj"
	E1207 23:16:48.103893       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9bldj\": pod kindnet-9bldj is already assigned to node \"ha-907658-m05\"" logger="UnhandledError" pod="kube-system/kindnet-9bldj"
	I1207 23:16:48.104274       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9bldj" node="ha-907658-m05"
	I1207 23:16:48.103898       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-q7q27" node="ha-907658-m05"
	E1207 23:16:48.151216       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cfsf2\": pod kindnet-cfsf2 is already assigned to node \"ha-907658-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-cfsf2" node="ha-907658-m05"
	E1207 23:16:48.151287       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 903a9169-be6b-43d6-a420-97fbf733ae2e(kube-system/kindnet-cfsf2) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-cfsf2"
	E1207 23:16:48.151313       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cfsf2\": pod kindnet-cfsf2 is already assigned to node \"ha-907658-m05\"" logger="UnhandledError" pod="kube-system/kindnet-cfsf2"
	I1207 23:16:48.153264       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cfsf2" node="ha-907658-m05"
	E1207 23:16:48.161551       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-m7znc\": pod kube-proxy-m7znc is already assigned to node \"ha-907658-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-m7znc" node="ha-907658-m05"
	E1207 23:16:48.161616       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4a3b595d-5f99-4c31-8948-8b33fdd5bdd5(kube-system/kube-proxy-m7znc) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-m7znc"
	E1207 23:16:48.161639       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-m7znc\": pod kube-proxy-m7znc is already assigned to node \"ha-907658-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-m7znc"
	I1207 23:16:48.162820       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-m7znc" node="ha-907658-m05"
	
	
	==> kubelet <==
	Dec 07 23:12:00 ha-907658 kubelet[746]: E1207 23:12:00.081635     746 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-907658\" not found" node="ha-907658"
	Dec 07 23:12:01 ha-907658 kubelet[746]: E1207 23:12:01.083780     746 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-907658\" not found" node="ha-907658"
	Dec 07 23:12:01 ha-907658 kubelet[746]: E1207 23:12:01.083932     746 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-907658\" not found" node="ha-907658"
	Dec 07 23:12:01 ha-907658 kubelet[746]: E1207 23:12:01.084030     746 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-907658\" not found" node="ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.042925     746 apiserver.go:52] "Watching apiserver"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.045963     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.069383     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-ha-907658\" already exists" pod="kube-system/etcd-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.069626     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.087189     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-907658\" already exists" pod="kube-system/kube-apiserver-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.091705     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.100510     746 kubelet_node_status.go:124] "Node was previously registered" node="ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.100646     746 kubelet_node_status.go:78] "Successfully registered node" node="ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.100685     746 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.101661     746 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.104485     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-907658\" already exists" pod="kube-system/kube-controller-manager-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.104628     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.115174     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-907658\" already exists" pod="kube-system/kube-scheduler-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.115385     746 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: E1207 23:12:03.125044     746 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-vip-ha-907658\" already exists" pod="kube-system/kube-vip-ha-907658"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.146852     746 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.199347     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9-xtables-lock\") pod \"kube-proxy-r5c77\" (UID: \"c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9\") " pod="kube-system/kube-proxy-r5c77"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.199404     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c0ef1d7-39de-46ce-b16b-4d2794e7dc20-lib-modules\") pod \"kindnet-hzfvq\" (UID: \"8c0ef1d7-39de-46ce-b16b-4d2794e7dc20\") " pod="kube-system/kindnet-hzfvq"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.200064     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8c0ef1d7-39de-46ce-b16b-4d2794e7dc20-cni-cfg\") pod \"kindnet-hzfvq\" (UID: \"8c0ef1d7-39de-46ce-b16b-4d2794e7dc20\") " pod="kube-system/kindnet-hzfvq"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.200129     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c0ef1d7-39de-46ce-b16b-4d2794e7dc20-xtables-lock\") pod \"kindnet-hzfvq\" (UID: \"8c0ef1d7-39de-46ce-b16b-4d2794e7dc20\") " pod="kube-system/kindnet-hzfvq"
	Dec 07 23:12:03 ha-907658 kubelet[746]: I1207 23:12:03.200193     746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9-lib-modules\") pod \"kube-proxy-r5c77\" (UID: \"c0ba957f-b2b5-4e7a-b93a-b3619c1e4cf9\") " pod="kube-system/kube-proxy-r5c77"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-907658 -n ha-907658
helpers_test.go:269: (dbg) Run:  kubectl --context ha-907658 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.88s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-065588 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-065588 --output=json --user=testUser: exit status 80 (1.775500632s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b99000c0-8de3-48f4-86c5-7d482c8482b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-065588 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"fe2cc5ae-b0c1-4937-a817-2fe15d33a227","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-07T23:18:12Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"7fca41b4-6a4f-4939-b9f8-dbbc2eb78cc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-065588 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.78s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.83s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-065588 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-065588 --output=json --user=testUser: exit status 80 (1.827777018s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4d56310a-5e2a-4555-93f1-a628e83d85d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-065588 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"9151b411-7cc9-431f-a33f-83eb1f9bbc7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-07T23:18:13Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"03eb78f7-61d9-488b-b69a-2e88fb137513","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-065588 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.83s)

                                                
                                    
x
+
TestPause/serial/Pause (7.86s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-567110 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-567110 --alsologtostderr -v=5: exit status 80 (2.387785308s)

                                                
                                                
-- stdout --
	* Pausing node pause-567110 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:30:10.813514  588902 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:30:10.813837  588902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:30:10.813851  588902 out.go:374] Setting ErrFile to fd 2...
	I1207 23:30:10.813858  588902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:30:10.814212  588902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:30:10.814599  588902 out.go:368] Setting JSON to false
	I1207 23:30:10.814623  588902 mustload.go:66] Loading cluster: pause-567110
	I1207 23:30:10.815085  588902 config.go:182] Loaded profile config "pause-567110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:30:10.815536  588902 cli_runner.go:164] Run: docker container inspect pause-567110 --format={{.State.Status}}
	I1207 23:30:10.836913  588902 host.go:66] Checking if "pause-567110" exists ...
	I1207 23:30:10.837435  588902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:30:10.898671  588902 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-07 23:30:10.887735319 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:30:10.899565  588902 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-567110 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1207 23:30:10.901830  588902 out.go:179] * Pausing node pause-567110 ... 
	I1207 23:30:10.903285  588902 host.go:66] Checking if "pause-567110" exists ...
	I1207 23:30:10.903591  588902 ssh_runner.go:195] Run: systemctl --version
	I1207 23:30:10.903632  588902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-567110
	I1207 23:30:10.925088  588902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33348 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/pause-567110/id_rsa Username:docker}
	I1207 23:30:11.023884  588902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:30:11.039645  588902 pause.go:52] kubelet running: true
	I1207 23:30:11.039738  588902 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:30:11.173393  588902 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:30:11.173503  588902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:30:11.249569  588902 cri.go:89] found id: "4595c4c00890301c00d701d33c58933a77cd29ab97cd2e5304a777d15cefd0d0"
	I1207 23:30:11.249601  588902 cri.go:89] found id: "a860660b71bbc8c017e2fb5454ef22c9f822a2ada1b89b2183e9f7d7909a1349"
	I1207 23:30:11.249606  588902 cri.go:89] found id: "2b12933a71b850c78d62f283e85fc636b88a29fd602da8d9655a289c5b8af04d"
	I1207 23:30:11.249610  588902 cri.go:89] found id: "1fc2c0c292feead088aa3d54e0da73fa07c7f9e1766d492c830da817741b7757"
	I1207 23:30:11.249613  588902 cri.go:89] found id: "da3b7831e8edd04c91875054bbf6f2f81ca02cd681b035e3d8c5dbf875fbe218"
	I1207 23:30:11.249616  588902 cri.go:89] found id: "91f7e7211d7b8742ae02b0af28d5305d7fddfa057413826b3a792681a2981e86"
	I1207 23:30:11.249619  588902 cri.go:89] found id: "5011fb898c31f28341021b2a0ef4276eb0260fd3056ad0df530c70696174d1b8"
	I1207 23:30:11.249621  588902 cri.go:89] found id: ""
	I1207 23:30:11.249668  588902 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:30:11.262177  588902 retry.go:31] will retry after 270.83214ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:30:11Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:30:11.533737  588902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:30:11.548305  588902 pause.go:52] kubelet running: false
	I1207 23:30:11.548408  588902 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:30:11.701735  588902 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:30:11.701848  588902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:30:11.790722  588902 cri.go:89] found id: "4595c4c00890301c00d701d33c58933a77cd29ab97cd2e5304a777d15cefd0d0"
	I1207 23:30:11.790749  588902 cri.go:89] found id: "a860660b71bbc8c017e2fb5454ef22c9f822a2ada1b89b2183e9f7d7909a1349"
	I1207 23:30:11.790754  588902 cri.go:89] found id: "2b12933a71b850c78d62f283e85fc636b88a29fd602da8d9655a289c5b8af04d"
	I1207 23:30:11.790757  588902 cri.go:89] found id: "1fc2c0c292feead088aa3d54e0da73fa07c7f9e1766d492c830da817741b7757"
	I1207 23:30:11.790760  588902 cri.go:89] found id: "da3b7831e8edd04c91875054bbf6f2f81ca02cd681b035e3d8c5dbf875fbe218"
	I1207 23:30:11.790762  588902 cri.go:89] found id: "91f7e7211d7b8742ae02b0af28d5305d7fddfa057413826b3a792681a2981e86"
	I1207 23:30:11.790765  588902 cri.go:89] found id: "5011fb898c31f28341021b2a0ef4276eb0260fd3056ad0df530c70696174d1b8"
	I1207 23:30:11.790768  588902 cri.go:89] found id: ""
	I1207 23:30:11.790815  588902 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:30:11.803267  588902 retry.go:31] will retry after 331.720575ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:30:11Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:30:12.135662  588902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:30:12.163117  588902 pause.go:52] kubelet running: false
	I1207 23:30:12.163244  588902 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:30:12.336955  588902 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:30:12.337061  588902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:30:12.413435  588902 cri.go:89] found id: "4595c4c00890301c00d701d33c58933a77cd29ab97cd2e5304a777d15cefd0d0"
	I1207 23:30:12.413469  588902 cri.go:89] found id: "a860660b71bbc8c017e2fb5454ef22c9f822a2ada1b89b2183e9f7d7909a1349"
	I1207 23:30:12.413474  588902 cri.go:89] found id: "2b12933a71b850c78d62f283e85fc636b88a29fd602da8d9655a289c5b8af04d"
	I1207 23:30:12.413478  588902 cri.go:89] found id: "1fc2c0c292feead088aa3d54e0da73fa07c7f9e1766d492c830da817741b7757"
	I1207 23:30:12.413483  588902 cri.go:89] found id: "da3b7831e8edd04c91875054bbf6f2f81ca02cd681b035e3d8c5dbf875fbe218"
	I1207 23:30:12.413487  588902 cri.go:89] found id: "91f7e7211d7b8742ae02b0af28d5305d7fddfa057413826b3a792681a2981e86"
	I1207 23:30:12.413532  588902 cri.go:89] found id: "5011fb898c31f28341021b2a0ef4276eb0260fd3056ad0df530c70696174d1b8"
	I1207 23:30:12.413545  588902 cri.go:89] found id: ""
	I1207 23:30:12.413594  588902 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:30:12.427305  588902 retry.go:31] will retry after 421.351797ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:30:12Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:30:12.849729  588902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:30:12.863081  588902 pause.go:52] kubelet running: false
	I1207 23:30:12.863151  588902 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:30:13.011511  588902 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:30:13.011619  588902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:30:13.100181  588902 cri.go:89] found id: "4595c4c00890301c00d701d33c58933a77cd29ab97cd2e5304a777d15cefd0d0"
	I1207 23:30:13.100206  588902 cri.go:89] found id: "a860660b71bbc8c017e2fb5454ef22c9f822a2ada1b89b2183e9f7d7909a1349"
	I1207 23:30:13.100212  588902 cri.go:89] found id: "2b12933a71b850c78d62f283e85fc636b88a29fd602da8d9655a289c5b8af04d"
	I1207 23:30:13.100218  588902 cri.go:89] found id: "1fc2c0c292feead088aa3d54e0da73fa07c7f9e1766d492c830da817741b7757"
	I1207 23:30:13.100222  588902 cri.go:89] found id: "da3b7831e8edd04c91875054bbf6f2f81ca02cd681b035e3d8c5dbf875fbe218"
	I1207 23:30:13.100226  588902 cri.go:89] found id: "91f7e7211d7b8742ae02b0af28d5305d7fddfa057413826b3a792681a2981e86"
	I1207 23:30:13.100229  588902 cri.go:89] found id: "5011fb898c31f28341021b2a0ef4276eb0260fd3056ad0df530c70696174d1b8"
	I1207 23:30:13.100233  588902 cri.go:89] found id: ""
	I1207 23:30:13.100282  588902 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:30:13.120109  588902 out.go:203] 
	W1207 23:30:13.121593  588902 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:30:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:30:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 23:30:13.121624  588902 out.go:285] * 
	* 
	W1207 23:30:13.127974  588902 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 23:30:13.129492  588902 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-567110 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-567110
helpers_test.go:243: (dbg) docker inspect pause-567110:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de",
	        "Created": "2025-12-07T23:29:20.398318068Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 575259,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:29:21.648394063Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de/hosts",
	        "LogPath": "/var/lib/docker/containers/ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de/ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de-json.log",
	        "Name": "/pause-567110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-567110:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-567110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de",
	                "LowerDir": "/var/lib/docker/overlay2/05caf5a6e49460d67fdd485b78726d92ce852c5bc0a30a77759fe4df24e81263-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/05caf5a6e49460d67fdd485b78726d92ce852c5bc0a30a77759fe4df24e81263/merged",
	                "UpperDir": "/var/lib/docker/overlay2/05caf5a6e49460d67fdd485b78726d92ce852c5bc0a30a77759fe4df24e81263/diff",
	                "WorkDir": "/var/lib/docker/overlay2/05caf5a6e49460d67fdd485b78726d92ce852c5bc0a30a77759fe4df24e81263/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-567110",
	                "Source": "/var/lib/docker/volumes/pause-567110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-567110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-567110",
	                "name.minikube.sigs.k8s.io": "pause-567110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a4392a06098c60b8ce10d96ec568ada3eb6d15854fc24c8a6f75ad0abdbb1f10",
	            "SandboxKey": "/var/run/docker/netns/a4392a06098c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-567110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6269e2196c575c2b89935e29c15b68831900c6fea37d016ed6ccbc106832311b",
	                    "EndpointID": "2e30e52259e0240aa1431f2591164c8ac648599dc27ffae60674bc49ea80bfd8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "12:19:b5:9e:0f:03",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-567110",
	                        "ae93836725d2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-567110 -n pause-567110
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-567110 -n pause-567110: exit status 2 (379.877861ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-567110 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-567110 logs -n 25: (2.620501529s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-899153 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --cancel-scheduled                                                                       │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │ 07 Dec 25 23:27 UTC │
	│ stop    │ -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:28 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:28 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:28 UTC │ 07 Dec 25 23:28 UTC │
	│ delete  │ -p scheduled-stop-899153                                                                                          │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:28 UTC │ 07 Dec 25 23:28 UTC │
	│ start   │ -p insufficient-storage-517111 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio  │ insufficient-storage-517111 │ jenkins │ v1.37.0 │ 07 Dec 25 23:28 UTC │                     │
	│ delete  │ -p insufficient-storage-517111                                                                                    │ insufficient-storage-517111 │ jenkins │ v1.37.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:29 UTC │
	│ start   │ -p pause-567110 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio         │ pause-567110                │ jenkins │ v1.37.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:30 UTC │
	│ start   │ -p force-systemd-env-599541 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio        │ force-systemd-env-599541    │ jenkins │ v1.37.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:29 UTC │
	│ start   │ -p offline-crio-504484 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ offline-crio-504484         │ jenkins │ v1.37.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:30 UTC │
	│ start   │ -p stopped-upgrade-604160 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ stopped-upgrade-604160      │ jenkins │ v1.35.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:30 UTC │
	│ delete  │ -p force-systemd-env-599541                                                                                       │ force-systemd-env-599541    │ jenkins │ v1.37.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:29 UTC │
	│ start   │ -p running-upgrade-991102 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ running-upgrade-991102      │ jenkins │ v1.35.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:30 UTC │
	│ stop    │ stopped-upgrade-604160 stop                                                                                       │ stopped-upgrade-604160      │ jenkins │ v1.35.0 │ 07 Dec 25 23:30 UTC │                     │
	│ start   │ -p pause-567110 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ pause-567110                │ jenkins │ v1.37.0 │ 07 Dec 25 23:30 UTC │ 07 Dec 25 23:30 UTC │
	│ delete  │ -p offline-crio-504484                                                                                            │ offline-crio-504484         │ jenkins │ v1.37.0 │ 07 Dec 25 23:30 UTC │ 07 Dec 25 23:30 UTC │
	│ pause   │ -p pause-567110 --alsologtostderr -v=5                                                                            │ pause-567110                │ jenkins │ v1.37.0 │ 07 Dec 25 23:30 UTC │                     │
	│ start   │ -p missing-upgrade-776369 --memory=3072 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-776369      │ jenkins │ v1.35.0 │ 07 Dec 25 23:30 UTC │                     │
	│ start   │ -p running-upgrade-991102 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ running-upgrade-991102      │ jenkins │ v1.37.0 │ 07 Dec 25 23:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:30:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:30:12.950593  589793 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:30:12.950904  589793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:30:12.950919  589793 out.go:374] Setting ErrFile to fd 2...
	I1207 23:30:12.950925  589793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:30:12.951194  589793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:30:12.951702  589793 out.go:368] Setting JSON to false
	I1207 23:30:12.952812  589793 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7957,"bootTime":1765142256,"procs":277,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:30:12.952881  589793 start.go:143] virtualization: kvm guest
	I1207 23:30:12.956750  589793 out.go:179] * [running-upgrade-991102] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:30:12.958298  589793 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:30:12.958348  589793 notify.go:221] Checking for updates...
	I1207 23:30:12.961150  589793 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:30:12.964653  589793 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:30:12.966441  589793 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:30:12.967843  589793 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:30:12.969174  589793 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:30:12.971287  589793 config.go:182] Loaded profile config "running-upgrade-991102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1207 23:30:12.973754  589793 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1207 23:30:12.975034  589793 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:30:13.003394  589793 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:30:13.003502  589793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:30:13.074116  589793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:false NGoroutines:94 SystemTime:2025-12-07 23:30:13.05909164 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:30:13.074303  589793 docker.go:319] overlay module found
	I1207 23:30:13.079438  589793 out.go:179] * Using the docker driver based on existing profile
	I1207 23:30:13.080875  589793 start.go:309] selected driver: docker
	I1207 23:30:13.080897  589793 start.go:927] validating driver "docker" against &{Name:running-upgrade-991102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-991102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:30:13.081025  589793 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:30:13.082008  589793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:30:13.155614  589793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-07 23:30:13.14330112 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:30:13.156030  589793 cni.go:84] Creating CNI manager for ""
	I1207 23:30:13.156115  589793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:30:13.156179  589793 start.go:353] cluster config:
	{Name:running-upgrade-991102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-991102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:30:13.157977  589793 out.go:179] * Starting "running-upgrade-991102" primary control-plane node in "running-upgrade-991102" cluster
	I1207 23:30:13.161136  589793 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:30:13.163873  589793 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:30:13.165205  589793 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1207 23:30:13.165249  589793 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1207 23:30:13.165276  589793 cache.go:65] Caching tarball of preloaded images
	I1207 23:30:13.165305  589793 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1207 23:30:13.165406  589793 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:30:13.165423  589793 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1207 23:30:13.165569  589793 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/running-upgrade-991102/config.json ...
	I1207 23:30:13.190687  589793 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1207 23:30:13.190727  589793 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1207 23:30:13.190749  589793 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:30:13.190798  589793 start.go:360] acquireMachinesLock for running-upgrade-991102: {Name:mk7834dff192321c4107ee7bf103d5e66678cd59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:30:13.190885  589793 start.go:364] duration metric: took 60.867µs to acquireMachinesLock for "running-upgrade-991102"
	I1207 23:30:13.190909  589793 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:30:13.190916  589793 fix.go:54] fixHost starting: 
	I1207 23:30:13.191176  589793 cli_runner.go:164] Run: docker container inspect running-upgrade-991102 --format={{.State.Status}}
	I1207 23:30:13.212436  589793 fix.go:112] recreateIfNeeded on running-upgrade-991102: state=Running err=<nil>
	W1207 23:30:13.212480  589793 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.490115299Z" level=info msg="RDT not available in the host system"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.490125085Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.490973493Z" level=info msg="Conmon does support the --sync option"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.490995109Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.491009903Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.491794962Z" level=info msg="Conmon does support the --sync option"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.491811133Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.496418761Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.496450797Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.497030666Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.497480853Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.497551798Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.579048384Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-whwnc Namespace:kube-system ID:16def7bc779e0b7a1e796a2d631da036c0590e5de0e6b8fc2c8c4faa2820522a UID:24fc67f2-09be-4bcb-96d7-59db47d6c5f4 NetNS:/var/run/netns/e3c1df5a-b3ad-4dc9-b2cb-30bcfb01a89d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00078e230}] Aliases:map[]}"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.579272856Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-whwnc for CNI network kindnet (type=ptp)"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.579777886Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.579803981Z" level=info msg="Starting seccomp notifier watcher"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.579881366Z" level=info msg="Create NRI interface"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.580036469Z" level=info msg="built-in NRI default validator is disabled"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.580053266Z" level=info msg="runtime interface created"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.580063117Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.58006858Z" level=info msg="runtime interface starting up..."
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.58007332Z" level=info msg="starting plugins..."
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.580085473Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.580473987Z" level=info msg="No systemd watchdog enabled"
	Dec 07 23:30:07 pause-567110 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4595c4c008903       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   16def7bc779e0       coredns-66bc5c9577-whwnc               kube-system
	a860660b71bbc       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   24 seconds ago      Running             kube-proxy                0                   9d66dbff07d4c       kube-proxy-qjmnd                       kube-system
	2b12933a71b85       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   56f2082678313       kindnet-ddlh6                          kube-system
	1fc2c0c292fee       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   35 seconds ago      Running             kube-scheduler            0                   5f556c4f5c321       kube-scheduler-pause-567110            kube-system
	da3b7831e8edd       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   35 seconds ago      Running             kube-controller-manager   0                   1e46c89a9b6b2       kube-controller-manager-pause-567110   kube-system
	91f7e7211d7b8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   35 seconds ago      Running             etcd                      0                   98c3c9cae89bb       etcd-pause-567110                      kube-system
	5011fb898c31f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   35 seconds ago      Running             kube-apiserver            0                   b3783df2e488f       kube-apiserver-pause-567110            kube-system
	
	
	==> coredns [4595c4c00890301c00d701d33c58933a77cd29ab97cd2e5304a777d15cefd0d0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43316 - 48555 "HINFO IN 3103597898622812977.4003902187067832252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063914178s
	
	
	==> describe nodes <==
	Name:               pause-567110
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-567110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=pause-567110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_29_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:29:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-567110
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:30:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:30:06 +0000   Sun, 07 Dec 2025 23:29:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:30:06 +0000   Sun, 07 Dec 2025 23:29:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:30:06 +0000   Sun, 07 Dec 2025 23:29:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:30:06 +0000   Sun, 07 Dec 2025 23:30:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-567110
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                f5df8d5d-bc16-40e6-9b8b-cadca0a058e5
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-whwnc                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-567110                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-ddlh6                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-567110             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-567110    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-qjmnd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-567110             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node pause-567110 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node pause-567110 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node pause-567110 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node pause-567110 event: Registered Node pause-567110 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-567110 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [91f7e7211d7b8742ae02b0af28d5305d7fddfa057413826b3a792681a2981e86] <==
	{"level":"warn","ts":"2025-12-07T23:29:41.910862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.920189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.935241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.946524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.954121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.961728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.968699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.976050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.983749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.997528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.004595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.012268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.019317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.027404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.034378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.042908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.061781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.064988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.072961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.079884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.087245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.103864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.110482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.118065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.187645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57248","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:30:15 up  2:12,  0 user,  load average: 3.14, 1.62, 1.46
	Linux pause-567110 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2b12933a71b850c78d62f283e85fc636b88a29fd602da8d9655a289c5b8af04d] <==
	I1207 23:29:51.357568       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:29:51.357824       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1207 23:29:51.358007       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:29:51.358030       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:29:51.358061       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:29:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:29:51.562688       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:29:51.562716       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:29:51.562732       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:29:51.562891       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:29:51.863843       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:29:51.863870       1 metrics.go:72] Registering metrics
	I1207 23:29:51.863987       1 controller.go:711] "Syncing nftables rules"
	I1207 23:30:01.564490       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:30:01.564575       1 main.go:301] handling current node
	I1207 23:30:11.567470       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:30:11.567515       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5011fb898c31f28341021b2a0ef4276eb0260fd3056ad0df530c70696174d1b8] <==
	I1207 23:29:42.799448       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1207 23:29:42.801132       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:29:42.806375       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1207 23:29:42.806414       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1207 23:29:42.815651       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1207 23:29:42.815691       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:29:42.823841       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:29:42.823847       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1207 23:29:43.701561       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1207 23:29:43.705931       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1207 23:29:43.705951       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 23:29:44.281053       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:29:44.324592       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:29:44.404853       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1207 23:29:44.410829       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1207 23:29:44.411917       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:29:44.417452       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:29:44.746981       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:29:45.581282       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:29:45.594671       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1207 23:29:45.604371       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:29:50.557957       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:29:50.665580       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:29:50.674041       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:29:50.804362       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [da3b7831e8edd04c91875054bbf6f2f81ca02cd681b035e3d8c5dbf875fbe218] <==
	I1207 23:29:49.747594       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1207 23:29:49.747605       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1207 23:29:49.747682       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1207 23:29:49.747797       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1207 23:29:49.747857       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1207 23:29:49.747595       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1207 23:29:49.747915       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1207 23:29:49.748065       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1207 23:29:49.748481       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1207 23:29:49.748579       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1207 23:29:49.750239       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1207 23:29:49.751855       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1207 23:29:49.751940       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1207 23:29:49.751989       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1207 23:29:49.752000       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1207 23:29:49.752006       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1207 23:29:49.753081       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1207 23:29:49.757431       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 23:29:49.762298       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-567110" podCIDRs=["10.244.0.0/24"]
	I1207 23:29:49.772211       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 23:29:49.785434       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 23:29:49.785459       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1207 23:29:49.785471       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1207 23:29:49.795308       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 23:30:04.698471       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a860660b71bbc8c017e2fb5454ef22c9f822a2ada1b89b2183e9f7d7909a1349] <==
	I1207 23:29:51.269945       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:29:51.342296       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 23:29:51.443133       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 23:29:51.443170       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1207 23:29:51.443467       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:29:51.465074       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:29:51.465165       1 server_linux.go:132] "Using iptables Proxier"
	I1207 23:29:51.470757       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:29:51.471209       1 server.go:527] "Version info" version="v1.34.2"
	I1207 23:29:51.471247       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:29:51.472906       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:29:51.472930       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:29:51.472975       1 config.go:200] "Starting service config controller"
	I1207 23:29:51.472982       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:29:51.473178       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:29:51.473185       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:29:51.473410       1 config.go:309] "Starting node config controller"
	I1207 23:29:51.473425       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:29:51.473432       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:29:51.573026       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:29:51.573072       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:29:51.573283       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1fc2c0c292feead088aa3d54e0da73fa07c7f9e1766d492c830da817741b7757] <==
	E1207 23:29:42.790025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 23:29:42.790059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 23:29:42.790070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 23:29:42.790117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 23:29:42.790155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 23:29:42.790172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 23:29:42.790168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 23:29:42.790546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 23:29:42.790632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 23:29:42.790742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 23:29:42.790780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 23:29:42.790859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 23:29:43.676701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 23:29:43.728192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 23:29:43.736318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 23:29:43.739615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 23:29:43.745878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1207 23:29:43.751055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 23:29:43.800132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 23:29:43.830613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 23:29:43.872898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 23:29:43.904395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 23:29:43.948568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1207 23:29:44.016165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1207 23:29:46.584821       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.864258    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30eda86a-b6b3-42c8-95c5-fe75c3e3ce7f-xtables-lock\") pod \"kindnet-ddlh6\" (UID: \"30eda86a-b6b3-42c8-95c5-fe75c3e3ce7f\") " pod="kube-system/kindnet-ddlh6"
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.864298    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv78f\" (UniqueName: \"kubernetes.io/projected/30eda86a-b6b3-42c8-95c5-fe75c3e3ce7f-kube-api-access-mv78f\") pod \"kindnet-ddlh6\" (UID: \"30eda86a-b6b3-42c8-95c5-fe75c3e3ce7f\") " pod="kube-system/kindnet-ddlh6"
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.864366    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30eda86a-b6b3-42c8-95c5-fe75c3e3ce7f-lib-modules\") pod \"kindnet-ddlh6\" (UID: \"30eda86a-b6b3-42c8-95c5-fe75c3e3ce7f\") " pod="kube-system/kindnet-ddlh6"
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.965773    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9cc4c179-84c0-48e9-83f8-9c2334b7f51f-kube-proxy\") pod \"kube-proxy-qjmnd\" (UID: \"9cc4c179-84c0-48e9-83f8-9c2334b7f51f\") " pod="kube-system/kube-proxy-qjmnd"
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.966131    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cc4c179-84c0-48e9-83f8-9c2334b7f51f-xtables-lock\") pod \"kube-proxy-qjmnd\" (UID: \"9cc4c179-84c0-48e9-83f8-9c2334b7f51f\") " pod="kube-system/kube-proxy-qjmnd"
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.966165    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cc4c179-84c0-48e9-83f8-9c2334b7f51f-lib-modules\") pod \"kube-proxy-qjmnd\" (UID: \"9cc4c179-84c0-48e9-83f8-9c2334b7f51f\") " pod="kube-system/kube-proxy-qjmnd"
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.966197    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8r6d\" (UniqueName: \"kubernetes.io/projected/9cc4c179-84c0-48e9-83f8-9c2334b7f51f-kube-api-access-r8r6d\") pod \"kube-proxy-qjmnd\" (UID: \"9cc4c179-84c0-48e9-83f8-9c2334b7f51f\") " pod="kube-system/kube-proxy-qjmnd"
	Dec 07 23:29:51 pause-567110 kubelet[1336]: I1207 23:29:51.487493    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qjmnd" podStartSLOduration=1.487472108 podStartE2EDuration="1.487472108s" podCreationTimestamp="2025-12-07 23:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:29:51.487138501 +0000 UTC m=+6.148050669" watchObservedRunningTime="2025-12-07 23:29:51.487472108 +0000 UTC m=+6.148384275"
	Dec 07 23:29:51 pause-567110 kubelet[1336]: I1207 23:29:51.497104    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ddlh6" podStartSLOduration=1.4970776639999999 podStartE2EDuration="1.497077664s" podCreationTimestamp="2025-12-07 23:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:29:51.496902144 +0000 UTC m=+6.157814323" watchObservedRunningTime="2025-12-07 23:29:51.497077664 +0000 UTC m=+6.157989832"
	Dec 07 23:30:02 pause-567110 kubelet[1336]: I1207 23:30:02.014448    1336 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 07 23:30:02 pause-567110 kubelet[1336]: I1207 23:30:02.143178    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfn55\" (UniqueName: \"kubernetes.io/projected/24fc67f2-09be-4bcb-96d7-59db47d6c5f4-kube-api-access-pfn55\") pod \"coredns-66bc5c9577-whwnc\" (UID: \"24fc67f2-09be-4bcb-96d7-59db47d6c5f4\") " pod="kube-system/coredns-66bc5c9577-whwnc"
	Dec 07 23:30:02 pause-567110 kubelet[1336]: I1207 23:30:02.143241    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24fc67f2-09be-4bcb-96d7-59db47d6c5f4-config-volume\") pod \"coredns-66bc5c9577-whwnc\" (UID: \"24fc67f2-09be-4bcb-96d7-59db47d6c5f4\") " pod="kube-system/coredns-66bc5c9577-whwnc"
	Dec 07 23:30:02 pause-567110 kubelet[1336]: I1207 23:30:02.517819    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-whwnc" podStartSLOduration=12.517798536 podStartE2EDuration="12.517798536s" podCreationTimestamp="2025-12-07 23:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:30:02.517794754 +0000 UTC m=+17.178706931" watchObservedRunningTime="2025-12-07 23:30:02.517798536 +0000 UTC m=+17.178710705"
	Dec 07 23:30:07 pause-567110 kubelet[1336]: W1207 23:30:07.448558    1336 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.448675    1336 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.448766    1336 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.448798    1336 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.448817    1336 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.519219    1336 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.519304    1336 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.519372    1336 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 07 23:30:11 pause-567110 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 07 23:30:11 pause-567110 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 07 23:30:11 pause-567110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 07 23:30:11 pause-567110 systemd[1]: kubelet.service: Consumed 1.256s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-567110 -n pause-567110
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-567110 -n pause-567110: exit status 2 (493.855517ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-567110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-567110
helpers_test.go:243: (dbg) docker inspect pause-567110:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de",
	        "Created": "2025-12-07T23:29:20.398318068Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 575259,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:29:21.648394063Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de/hosts",
	        "LogPath": "/var/lib/docker/containers/ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de/ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de-json.log",
	        "Name": "/pause-567110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-567110:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-567110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ae93836725d25e90b38da77be4c0a7bdd769149667355bf27d96e246b31a48de",
	                "LowerDir": "/var/lib/docker/overlay2/05caf5a6e49460d67fdd485b78726d92ce852c5bc0a30a77759fe4df24e81263-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/05caf5a6e49460d67fdd485b78726d92ce852c5bc0a30a77759fe4df24e81263/merged",
	                "UpperDir": "/var/lib/docker/overlay2/05caf5a6e49460d67fdd485b78726d92ce852c5bc0a30a77759fe4df24e81263/diff",
	                "WorkDir": "/var/lib/docker/overlay2/05caf5a6e49460d67fdd485b78726d92ce852c5bc0a30a77759fe4df24e81263/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-567110",
	                "Source": "/var/lib/docker/volumes/pause-567110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-567110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-567110",
	                "name.minikube.sigs.k8s.io": "pause-567110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a4392a06098c60b8ce10d96ec568ada3eb6d15854fc24c8a6f75ad0abdbb1f10",
	            "SandboxKey": "/var/run/docker/netns/a4392a06098c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-567110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6269e2196c575c2b89935e29c15b68831900c6fea37d016ed6ccbc106832311b",
	                    "EndpointID": "2e30e52259e0240aa1431f2591164c8ac648599dc27ffae60674bc49ea80bfd8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "12:19:b5:9e:0f:03",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-567110",
	                        "ae93836725d2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-567110 -n pause-567110
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-567110 -n pause-567110: exit status 2 (377.778972ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-567110 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-567110 logs -n 25: (1.046071433s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-899153 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --cancel-scheduled                                                                       │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:27 UTC │ 07 Dec 25 23:27 UTC │
	│ stop    │ -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:28 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:28 UTC │                     │
	│ stop    │ -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:28 UTC │ 07 Dec 25 23:28 UTC │
	│ delete  │ -p scheduled-stop-899153                                                                                          │ scheduled-stop-899153       │ jenkins │ v1.37.0 │ 07 Dec 25 23:28 UTC │ 07 Dec 25 23:28 UTC │
	│ start   │ -p insufficient-storage-517111 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio  │ insufficient-storage-517111 │ jenkins │ v1.37.0 │ 07 Dec 25 23:28 UTC │                     │
	│ delete  │ -p insufficient-storage-517111                                                                                    │ insufficient-storage-517111 │ jenkins │ v1.37.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:29 UTC │
	│ start   │ -p pause-567110 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio         │ pause-567110                │ jenkins │ v1.37.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:30 UTC │
	│ start   │ -p force-systemd-env-599541 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio        │ force-systemd-env-599541    │ jenkins │ v1.37.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:29 UTC │
	│ start   │ -p offline-crio-504484 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ offline-crio-504484         │ jenkins │ v1.37.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:30 UTC │
	│ start   │ -p stopped-upgrade-604160 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ stopped-upgrade-604160      │ jenkins │ v1.35.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:30 UTC │
	│ delete  │ -p force-systemd-env-599541                                                                                       │ force-systemd-env-599541    │ jenkins │ v1.37.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:29 UTC │
	│ start   │ -p running-upgrade-991102 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ running-upgrade-991102      │ jenkins │ v1.35.0 │ 07 Dec 25 23:29 UTC │ 07 Dec 25 23:30 UTC │
	│ stop    │ stopped-upgrade-604160 stop                                                                                       │ stopped-upgrade-604160      │ jenkins │ v1.35.0 │ 07 Dec 25 23:30 UTC │ 07 Dec 25 23:30 UTC │
	│ start   │ -p pause-567110 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ pause-567110                │ jenkins │ v1.37.0 │ 07 Dec 25 23:30 UTC │ 07 Dec 25 23:30 UTC │
	│ delete  │ -p offline-crio-504484                                                                                            │ offline-crio-504484         │ jenkins │ v1.37.0 │ 07 Dec 25 23:30 UTC │ 07 Dec 25 23:30 UTC │
	│ pause   │ -p pause-567110 --alsologtostderr -v=5                                                                            │ pause-567110                │ jenkins │ v1.37.0 │ 07 Dec 25 23:30 UTC │                     │
	│ start   │ -p missing-upgrade-776369 --memory=3072 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-776369      │ jenkins │ v1.35.0 │ 07 Dec 25 23:30 UTC │                     │
	│ start   │ -p running-upgrade-991102 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ running-upgrade-991102      │ jenkins │ v1.37.0 │ 07 Dec 25 23:30 UTC │                     │
	│ start   │ -p stopped-upgrade-604160 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ stopped-upgrade-604160      │ jenkins │ v1.37.0 │ 07 Dec 25 23:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:30:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:30:15.671026  590594 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:30:15.671230  590594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:30:15.671243  590594 out.go:374] Setting ErrFile to fd 2...
	I1207 23:30:15.671249  590594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:30:15.671544  590594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:30:15.672073  590594 out.go:368] Setting JSON to false
	I1207 23:30:15.673226  590594 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7960,"bootTime":1765142256,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:30:15.673304  590594 start.go:143] virtualization: kvm guest
	I1207 23:30:15.676335  590594 out.go:179] * [stopped-upgrade-604160] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:30:15.678243  590594 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:30:15.678271  590594 notify.go:221] Checking for updates...
	I1207 23:30:15.682104  590594 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:30:15.684952  590594 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:30:15.687640  590594 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:30:15.689302  590594 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:30:15.690579  590594 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:30:15.693089  590594 config.go:182] Loaded profile config "stopped-upgrade-604160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1207 23:30:15.696167  590594 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1207 23:30:15.698512  590594 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:30:15.736174  590594 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:30:15.736622  590594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:30:15.824295  590594 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-07 23:30:15.813000862 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:30:15.824514  590594 docker.go:319] overlay module found
	I1207 23:30:15.829057  590594 out.go:179] * Using the docker driver based on existing profile
	I1207 23:30:11.099656  588926 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1207 23:30:11.099938  588926 start.go:159] libmachine.API.Create for "missing-upgrade-776369" (driver="docker")
	I1207 23:30:11.099976  588926 client.go:168] LocalClient.Create starting
	I1207 23:30:11.100063  588926 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem
	I1207 23:30:11.100092  588926 main.go:141] libmachine: Decoding PEM data...
	I1207 23:30:11.100102  588926 main.go:141] libmachine: Parsing certificate...
	I1207 23:30:11.100160  588926 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem
	I1207 23:30:11.100185  588926 main.go:141] libmachine: Decoding PEM data...
	I1207 23:30:11.100194  588926 main.go:141] libmachine: Parsing certificate...
	I1207 23:30:11.100603  588926 cli_runner.go:164] Run: docker network inspect missing-upgrade-776369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 23:30:11.118686  588926 cli_runner.go:211] docker network inspect missing-upgrade-776369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 23:30:11.118764  588926 network_create.go:284] running [docker network inspect missing-upgrade-776369] to gather additional debugging logs...
	I1207 23:30:11.118796  588926 cli_runner.go:164] Run: docker network inspect missing-upgrade-776369
	W1207 23:30:11.137195  588926 cli_runner.go:211] docker network inspect missing-upgrade-776369 returned with exit code 1
	I1207 23:30:11.137239  588926 network_create.go:287] error running [docker network inspect missing-upgrade-776369]: docker network inspect missing-upgrade-776369: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-776369 not found
	I1207 23:30:11.137252  588926 network_create.go:289] output of [docker network inspect missing-upgrade-776369]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-776369 not found
	
	** /stderr **
	I1207 23:30:11.137390  588926 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:30:11.156787  588926 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-918c8f4f6e86 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:f0:02:fe:94:4b} reservation:<nil>}
	I1207 23:30:11.158242  588926 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce07fb07c16c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:d2:35:46:a2:0a} reservation:<nil>}
	I1207 23:30:11.159152  588926 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f198eadca31e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:79:39:d6:10:dc} reservation:<nil>}
	I1207 23:30:11.160113  588926 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9637f8924c46 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:36:0a:b8:bd:86:f3} reservation:<nil>}
	I1207 23:30:11.160819  588926 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6269e2196c57 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:fa:5e:a1:e6:ed:18} reservation:<nil>}
	I1207 23:30:11.161775  588926 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d851b0}
	I1207 23:30:11.161797  588926 network_create.go:124] attempt to create docker network missing-upgrade-776369 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1207 23:30:11.161858  588926 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-776369 missing-upgrade-776369
	I1207 23:30:11.215396  588926 network_create.go:108] docker network missing-upgrade-776369 192.168.94.0/24 created
	I1207 23:30:11.215414  588926 kic.go:121] calculated static IP "192.168.94.2" for the "missing-upgrade-776369" container
	I1207 23:30:11.215480  588926 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 23:30:11.235301  588926 cli_runner.go:164] Run: docker volume create missing-upgrade-776369 --label name.minikube.sigs.k8s.io=missing-upgrade-776369 --label created_by.minikube.sigs.k8s.io=true
	I1207 23:30:11.254937  588926 oci.go:103] Successfully created a docker volume missing-upgrade-776369
	I1207 23:30:11.255024  588926 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-776369-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-776369 --entrypoint /usr/bin/test -v missing-upgrade-776369:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I1207 23:30:11.984139  588926 oci.go:107] Successfully prepared a docker volume missing-upgrade-776369
	I1207 23:30:11.984177  588926 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1207 23:30:11.984203  588926 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 23:30:11.984299  588926 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-776369:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I1207 23:30:15.701726  588926 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-776369:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (3.717384871s)
	I1207 23:30:15.701811  588926 kic.go:203] duration metric: took 3.717569368s to extract preloaded images to volume ...
	W1207 23:30:15.702107  588926 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 23:30:15.702165  588926 oci.go:249] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 23:30:15.702214  588926 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 23:30:15.795874  588926 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-776369 --name missing-upgrade-776369 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-776369 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-776369 --network missing-upgrade-776369 --ip 192.168.94.2 --volume missing-upgrade-776369:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I1207 23:30:15.833485  590594 start.go:309] selected driver: docker
	I1207 23:30:15.833507  590594 start.go:927] validating driver "docker" against &{Name:stopped-upgrade-604160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-604160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:30:15.833625  590594 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:30:15.834282  590594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:30:15.905965  590594 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-07 23:30:15.89500548 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:30:15.906382  590594 cni.go:84] Creating CNI manager for ""
	I1207 23:30:15.906475  590594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:30:15.906545  590594 start.go:353] cluster config:
	{Name:stopped-upgrade-604160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-604160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:30:15.908667  590594 out.go:179] * Starting "stopped-upgrade-604160" primary control-plane node in "stopped-upgrade-604160" cluster
	I1207 23:30:15.910193  590594 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:30:15.911387  590594 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:30:15.912458  590594 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1207 23:30:15.912528  590594 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1207 23:30:15.912549  590594 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I1207 23:30:15.912556  590594 cache.go:65] Caching tarball of preloaded images
	I1207 23:30:15.912698  590594 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:30:15.912757  590594 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1207 23:30:15.912905  590594 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/stopped-upgrade-604160/config.json ...
	I1207 23:30:15.937548  590594 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I1207 23:30:15.937572  590594 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I1207 23:30:15.937593  590594 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:30:15.937631  590594 start.go:360] acquireMachinesLock for stopped-upgrade-604160: {Name:mk2d29cd2a037638cf4542f6616db838f6265f7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:30:15.937705  590594 start.go:364] duration metric: took 49.178µs to acquireMachinesLock for "stopped-upgrade-604160"
	I1207 23:30:15.937730  590594 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:30:15.937737  590594 fix.go:54] fixHost starting: 
	I1207 23:30:15.938056  590594 cli_runner.go:164] Run: docker container inspect stopped-upgrade-604160 --format={{.State.Status}}
	I1207 23:30:15.960844  590594 fix.go:112] recreateIfNeeded on stopped-upgrade-604160: state=Stopped err=<nil>
	W1207 23:30:15.960877  590594 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.490115299Z" level=info msg="RDT not available in the host system"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.490125085Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.490973493Z" level=info msg="Conmon does support the --sync option"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.490995109Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.491009903Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.491794962Z" level=info msg="Conmon does support the --sync option"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.491811133Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.496418761Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.496450797Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.497030666Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.497480853Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.497551798Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.579048384Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-whwnc Namespace:kube-system ID:16def7bc779e0b7a1e796a2d631da036c0590e5de0e6b8fc2c8c4faa2820522a UID:24fc67f2-09be-4bcb-96d7-59db47d6c5f4 NetNS:/var/run/netns/e3c1df5a-b3ad-4dc9-b2cb-30bcfb01a89d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00078e230}] Aliases:map[]}"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.579272856Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-whwnc for CNI network kindnet (type=ptp)"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.579777886Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.579803981Z" level=info msg="Starting seccomp notifier watcher"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.579881366Z" level=info msg="Create NRI interface"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.580036469Z" level=info msg="built-in NRI default validator is disabled"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.580053266Z" level=info msg="runtime interface created"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.580063117Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.58006858Z" level=info msg="runtime interface starting up..."
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.58007332Z" level=info msg="starting plugins..."
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.580085473Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 07 23:30:07 pause-567110 crio[2176]: time="2025-12-07T23:30:07.580473987Z" level=info msg="No systemd watchdog enabled"
	Dec 07 23:30:07 pause-567110 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4595c4c008903       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   0                   16def7bc779e0       coredns-66bc5c9577-whwnc               kube-system
	a860660b71bbc       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   26 seconds ago      Running             kube-proxy                0                   9d66dbff07d4c       kube-proxy-qjmnd                       kube-system
	2b12933a71b85       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   26 seconds ago      Running             kindnet-cni               0                   56f2082678313       kindnet-ddlh6                          kube-system
	1fc2c0c292fee       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   38 seconds ago      Running             kube-scheduler            0                   5f556c4f5c321       kube-scheduler-pause-567110            kube-system
	da3b7831e8edd       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   38 seconds ago      Running             kube-controller-manager   0                   1e46c89a9b6b2       kube-controller-manager-pause-567110   kube-system
	91f7e7211d7b8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   38 seconds ago      Running             etcd                      0                   98c3c9cae89bb       etcd-pause-567110                      kube-system
	5011fb898c31f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   38 seconds ago      Running             kube-apiserver            0                   b3783df2e488f       kube-apiserver-pause-567110            kube-system
	
	
	==> coredns [4595c4c00890301c00d701d33c58933a77cd29ab97cd2e5304a777d15cefd0d0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43316 - 48555 "HINFO IN 3103597898622812977.4003902187067832252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063914178s
	
	
	==> describe nodes <==
	Name:               pause-567110
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-567110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=pause-567110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_29_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:29:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-567110
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:30:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:30:06 +0000   Sun, 07 Dec 2025 23:29:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:30:06 +0000   Sun, 07 Dec 2025 23:29:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:30:06 +0000   Sun, 07 Dec 2025 23:29:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:30:06 +0000   Sun, 07 Dec 2025 23:30:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-567110
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                f5df8d5d-bc16-40e6-9b8b-cadca0a058e5
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-whwnc                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-567110                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-ddlh6                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-567110             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-567110    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-qjmnd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-567110             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node pause-567110 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node pause-567110 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node pause-567110 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node pause-567110 event: Registered Node pause-567110 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-567110 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [91f7e7211d7b8742ae02b0af28d5305d7fddfa057413826b3a792681a2981e86] <==
	{"level":"warn","ts":"2025-12-07T23:29:41.910862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.920189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.935241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.946524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.954121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.961728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.968699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.976050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.983749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:41.997528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.004595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.012268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.019317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.027404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.034378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.042908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.061781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.064988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.072961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.079884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.087245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.103864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.110482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.118065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:29:42.187645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57248","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:30:17 up  2:12,  0 user,  load average: 3.14, 1.62, 1.46
	Linux pause-567110 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2b12933a71b850c78d62f283e85fc636b88a29fd602da8d9655a289c5b8af04d] <==
	I1207 23:29:51.357568       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:29:51.357824       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1207 23:29:51.358007       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:29:51.358030       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:29:51.358061       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:29:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:29:51.562688       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:29:51.562716       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:29:51.562732       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:29:51.562891       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:29:51.863843       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:29:51.863870       1 metrics.go:72] Registering metrics
	I1207 23:29:51.863987       1 controller.go:711] "Syncing nftables rules"
	I1207 23:30:01.564490       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:30:01.564575       1 main.go:301] handling current node
	I1207 23:30:11.567470       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:30:11.567515       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5011fb898c31f28341021b2a0ef4276eb0260fd3056ad0df530c70696174d1b8] <==
	I1207 23:29:42.799448       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1207 23:29:42.801132       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:29:42.806375       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1207 23:29:42.806414       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1207 23:29:42.815651       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1207 23:29:42.815691       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:29:42.823841       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:29:42.823847       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1207 23:29:43.701561       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1207 23:29:43.705931       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1207 23:29:43.705951       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 23:29:44.281053       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:29:44.324592       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:29:44.404853       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1207 23:29:44.410829       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1207 23:29:44.411917       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:29:44.417452       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:29:44.746981       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:29:45.581282       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:29:45.594671       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1207 23:29:45.604371       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:29:50.557957       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:29:50.665580       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:29:50.674041       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:29:50.804362       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [da3b7831e8edd04c91875054bbf6f2f81ca02cd681b035e3d8c5dbf875fbe218] <==
	I1207 23:29:49.747594       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1207 23:29:49.747605       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1207 23:29:49.747682       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1207 23:29:49.747797       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1207 23:29:49.747857       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1207 23:29:49.747595       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1207 23:29:49.747915       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1207 23:29:49.748065       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1207 23:29:49.748481       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1207 23:29:49.748579       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1207 23:29:49.750239       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1207 23:29:49.751855       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1207 23:29:49.751940       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1207 23:29:49.751989       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1207 23:29:49.752000       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1207 23:29:49.752006       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1207 23:29:49.753081       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1207 23:29:49.757431       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 23:29:49.762298       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-567110" podCIDRs=["10.244.0.0/24"]
	I1207 23:29:49.772211       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 23:29:49.785434       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 23:29:49.785459       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1207 23:29:49.785471       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1207 23:29:49.795308       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 23:30:04.698471       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a860660b71bbc8c017e2fb5454ef22c9f822a2ada1b89b2183e9f7d7909a1349] <==
	I1207 23:29:51.269945       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:29:51.342296       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 23:29:51.443133       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 23:29:51.443170       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1207 23:29:51.443467       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:29:51.465074       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:29:51.465165       1 server_linux.go:132] "Using iptables Proxier"
	I1207 23:29:51.470757       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:29:51.471209       1 server.go:527] "Version info" version="v1.34.2"
	I1207 23:29:51.471247       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:29:51.472906       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:29:51.472930       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:29:51.472975       1 config.go:200] "Starting service config controller"
	I1207 23:29:51.472982       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:29:51.473178       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:29:51.473185       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:29:51.473410       1 config.go:309] "Starting node config controller"
	I1207 23:29:51.473425       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:29:51.473432       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:29:51.573026       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:29:51.573072       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:29:51.573283       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1fc2c0c292feead088aa3d54e0da73fa07c7f9e1766d492c830da817741b7757] <==
	E1207 23:29:42.790025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 23:29:42.790059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 23:29:42.790070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 23:29:42.790117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 23:29:42.790155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 23:29:42.790172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 23:29:42.790168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 23:29:42.790546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 23:29:42.790632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 23:29:42.790742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 23:29:42.790780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 23:29:42.790859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 23:29:43.676701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 23:29:43.728192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 23:29:43.736318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 23:29:43.739615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 23:29:43.745878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1207 23:29:43.751055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 23:29:43.800132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 23:29:43.830613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 23:29:43.872898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 23:29:43.904395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 23:29:43.948568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1207 23:29:44.016165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1207 23:29:46.584821       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.864258    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30eda86a-b6b3-42c8-95c5-fe75c3e3ce7f-xtables-lock\") pod \"kindnet-ddlh6\" (UID: \"30eda86a-b6b3-42c8-95c5-fe75c3e3ce7f\") " pod="kube-system/kindnet-ddlh6"
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.864298    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv78f\" (UniqueName: \"kubernetes.io/projected/30eda86a-b6b3-42c8-95c5-fe75c3e3ce7f-kube-api-access-mv78f\") pod \"kindnet-ddlh6\" (UID: \"30eda86a-b6b3-42c8-95c5-fe75c3e3ce7f\") " pod="kube-system/kindnet-ddlh6"
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.864366    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30eda86a-b6b3-42c8-95c5-fe75c3e3ce7f-lib-modules\") pod \"kindnet-ddlh6\" (UID: \"30eda86a-b6b3-42c8-95c5-fe75c3e3ce7f\") " pod="kube-system/kindnet-ddlh6"
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.965773    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9cc4c179-84c0-48e9-83f8-9c2334b7f51f-kube-proxy\") pod \"kube-proxy-qjmnd\" (UID: \"9cc4c179-84c0-48e9-83f8-9c2334b7f51f\") " pod="kube-system/kube-proxy-qjmnd"
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.966131    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cc4c179-84c0-48e9-83f8-9c2334b7f51f-xtables-lock\") pod \"kube-proxy-qjmnd\" (UID: \"9cc4c179-84c0-48e9-83f8-9c2334b7f51f\") " pod="kube-system/kube-proxy-qjmnd"
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.966165    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cc4c179-84c0-48e9-83f8-9c2334b7f51f-lib-modules\") pod \"kube-proxy-qjmnd\" (UID: \"9cc4c179-84c0-48e9-83f8-9c2334b7f51f\") " pod="kube-system/kube-proxy-qjmnd"
	Dec 07 23:29:50 pause-567110 kubelet[1336]: I1207 23:29:50.966197    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8r6d\" (UniqueName: \"kubernetes.io/projected/9cc4c179-84c0-48e9-83f8-9c2334b7f51f-kube-api-access-r8r6d\") pod \"kube-proxy-qjmnd\" (UID: \"9cc4c179-84c0-48e9-83f8-9c2334b7f51f\") " pod="kube-system/kube-proxy-qjmnd"
	Dec 07 23:29:51 pause-567110 kubelet[1336]: I1207 23:29:51.487493    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qjmnd" podStartSLOduration=1.487472108 podStartE2EDuration="1.487472108s" podCreationTimestamp="2025-12-07 23:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:29:51.487138501 +0000 UTC m=+6.148050669" watchObservedRunningTime="2025-12-07 23:29:51.487472108 +0000 UTC m=+6.148384275"
	Dec 07 23:29:51 pause-567110 kubelet[1336]: I1207 23:29:51.497104    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ddlh6" podStartSLOduration=1.4970776639999999 podStartE2EDuration="1.497077664s" podCreationTimestamp="2025-12-07 23:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:29:51.496902144 +0000 UTC m=+6.157814323" watchObservedRunningTime="2025-12-07 23:29:51.497077664 +0000 UTC m=+6.157989832"
	Dec 07 23:30:02 pause-567110 kubelet[1336]: I1207 23:30:02.014448    1336 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 07 23:30:02 pause-567110 kubelet[1336]: I1207 23:30:02.143178    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfn55\" (UniqueName: \"kubernetes.io/projected/24fc67f2-09be-4bcb-96d7-59db47d6c5f4-kube-api-access-pfn55\") pod \"coredns-66bc5c9577-whwnc\" (UID: \"24fc67f2-09be-4bcb-96d7-59db47d6c5f4\") " pod="kube-system/coredns-66bc5c9577-whwnc"
	Dec 07 23:30:02 pause-567110 kubelet[1336]: I1207 23:30:02.143241    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24fc67f2-09be-4bcb-96d7-59db47d6c5f4-config-volume\") pod \"coredns-66bc5c9577-whwnc\" (UID: \"24fc67f2-09be-4bcb-96d7-59db47d6c5f4\") " pod="kube-system/coredns-66bc5c9577-whwnc"
	Dec 07 23:30:02 pause-567110 kubelet[1336]: I1207 23:30:02.517819    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-whwnc" podStartSLOduration=12.517798536 podStartE2EDuration="12.517798536s" podCreationTimestamp="2025-12-07 23:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:30:02.517794754 +0000 UTC m=+17.178706931" watchObservedRunningTime="2025-12-07 23:30:02.517798536 +0000 UTC m=+17.178710705"
	Dec 07 23:30:07 pause-567110 kubelet[1336]: W1207 23:30:07.448558    1336 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.448675    1336 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.448766    1336 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.448798    1336 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.448817    1336 kubelet.go:2614] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.519219    1336 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.519304    1336 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 07 23:30:07 pause-567110 kubelet[1336]: E1207 23:30:07.519372    1336 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 07 23:30:11 pause-567110 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 07 23:30:11 pause-567110 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 07 23:30:11 pause-567110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 07 23:30:11 pause-567110 systemd[1]: kubelet.service: Consumed 1.256s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-567110 -n pause-567110
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-567110 -n pause-567110: exit status 2 (350.579438ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-567110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.86s)
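
The kubelet journal quoted above ends with repeated failures to dial /var/run/crio/crio.sock and with systemd stopping kubelet.service, while the status command still prints "Running" for the API server and exits with status 2. A minimal sketch of inspecting that same state by hand, assuming the pause-567110 profile from this run is still available:

    # machine-readable status for the paused profile (status accepts -o/--output json)
    out/minikube-linux-amd64 status -p pause-567110 --output=json
    # unit state for kubelet and crio inside the node; the journal above shows kubelet
    # being stopped and the crio socket going away
    out/minikube-linux-amd64 ssh -p pause-567110 -- "sudo systemctl status kubelet crio --no-pager"
    # container state as seen through the CRI tooling on the node
    out/minikube-linux-amd64 ssh -p pause-567110 -- "sudo crictl ps -a"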

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-320477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-320477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (264.035189ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:34:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-320477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-320477 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-320477 describe deploy/metrics-server -n kube-system: exit status 1 (68.601975ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-320477 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
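
The MK_ADDON_ENABLE_PAUSED failure above indicates that `addons enable` first checks whether the cluster is paused, and that the probe it reports, `sudo runc list -f json`, exits 1 on the node because /run/runc does not exist. A minimal sketch of rerunning that reported probe by hand, assuming the old-k8s-version-320477 profile from this run:

    # rerun the paused-state probe quoted in the error message above
    out/minikube-linux-amd64 -p old-k8s-version-320477 ssh "sudo runc list -f json"
    # cross-check container state through crictl, since the runtime here is crio
    out/minikube-linux-amd64 -p old-k8s-version-320477 ssh "sudo crictl ps -a"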
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-320477
helpers_test.go:243: (dbg) docker inspect old-k8s-version-320477:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60",
	        "Created": "2025-12-07T23:33:24.406627697Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 632125,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:33:24.44089131Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60/hostname",
	        "HostsPath": "/var/lib/docker/containers/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60/hosts",
	        "LogPath": "/var/lib/docker/containers/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60-json.log",
	        "Name": "/old-k8s-version-320477",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-320477:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-320477",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60",
	                "LowerDir": "/var/lib/docker/overlay2/acd9d1d66636fbbdfd34477ab909bc56ba8678951aa24f32a68daf160b304ed3-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/acd9d1d66636fbbdfd34477ab909bc56ba8678951aa24f32a68daf160b304ed3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/acd9d1d66636fbbdfd34477ab909bc56ba8678951aa24f32a68daf160b304ed3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/acd9d1d66636fbbdfd34477ab909bc56ba8678951aa24f32a68daf160b304ed3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-320477",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-320477/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-320477",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-320477",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-320477",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c10a22acb6cf90abfb054572d640ba74b42a3a43132a15aadcdcf573a5e9233d",
	            "SandboxKey": "/var/run/docker/netns/c10a22acb6cf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-320477": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "79f54ad63e607736183a174ecfbd71671c6240b2d3072bbde0376d130c69013c",
	                    "EndpointID": "117c8f07d0687a4bd5d30191947bb4614b3cc8d86fe702c0f8ea5c6bbe55f7b8",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f6:8f:e7:3c:f0:f0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-320477",
	                        "06913e870114"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
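
The docker inspect dump above is the full container record for old-k8s-version-320477; when only a few fields are of interest, the same data can be narrowed with docker's --format Go templates. A minimal sketch, reusing the container name from this run:

    # container state and init PID only
    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' old-k8s-version-320477
    # the published host ports (the 33428-33432 mappings listed above)
    docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-320477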
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-320477 -n old-k8s-version-320477
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-320477 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-320477 logs -n 25: (1.110888854s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-600852 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                        │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo docker system info                                                                                                                                                                                                      │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo containerd config dump                                                                                                                                                                                                  │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo crio config                                                                                                                                                                                                             │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ delete  │ -p cilium-600852                                                                                                                                                                                                                              │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ start   │ -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p cert-expiration-612608 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-612608 │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ delete  │ -p cert-expiration-612608                                                                                                                                                                                                                     │ cert-expiration-612608 │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-320477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:33:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:33:55.785914  638483 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:33:55.786180  638483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:33:55.786190  638483 out.go:374] Setting ErrFile to fd 2...
	I1207 23:33:55.786195  638483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:33:55.786398  638483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:33:55.786880  638483 out.go:368] Setting JSON to false
	I1207 23:33:55.788014  638483 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8180,"bootTime":1765142256,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:33:55.788075  638483 start.go:143] virtualization: kvm guest
	I1207 23:33:55.790395  638483 out.go:179] * [no-preload-313006] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:33:55.791880  638483 notify.go:221] Checking for updates...
	I1207 23:33:55.791916  638483 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:33:55.793306  638483 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:33:55.794801  638483 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:33:55.796229  638483 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:33:55.797761  638483 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:33:55.799270  638483 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:33:55.801045  638483 config.go:182] Loaded profile config "kubernetes-upgrade-703538": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:33:55.801157  638483 config.go:182] Loaded profile config "old-k8s-version-320477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1207 23:33:55.801230  638483 config.go:182] Loaded profile config "stopped-upgrade-604160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1207 23:33:55.801363  638483 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:33:55.824780  638483 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:33:55.824877  638483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:33:55.897319  638483 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-07 23:33:55.887312226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:33:55.897450  638483 docker.go:319] overlay module found
	I1207 23:33:55.899254  638483 out.go:179] * Using the docker driver based on user configuration
	I1207 23:33:55.900505  638483 start.go:309] selected driver: docker
	I1207 23:33:55.900527  638483 start.go:927] validating driver "docker" against <nil>
	I1207 23:33:55.900540  638483 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:33:55.901087  638483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:33:55.954769  638483 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-07 23:33:55.945576062 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:33:55.954942  638483 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 23:33:55.955181  638483 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:33:55.957173  638483 out.go:179] * Using Docker driver with root privileges
	I1207 23:33:55.958433  638483 cni.go:84] Creating CNI manager for ""
	I1207 23:33:55.958501  638483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:33:55.958513  638483 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 23:33:55.958610  638483 start.go:353] cluster config:
	{Name:no-preload-313006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:33:55.961031  638483 out.go:179] * Starting "no-preload-313006" primary control-plane node in "no-preload-313006" cluster
	I1207 23:33:55.962458  638483 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:33:55.963890  638483 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:33:55.965029  638483 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:33:55.965137  638483 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:33:55.965178  638483 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/config.json ...
	I1207 23:33:55.965225  638483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/config.json: {Name:mk0b473e117c7b7a372b31fc0beb1fa58b189d36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:33:55.965448  638483 cache.go:107] acquiring lock: {Name:mk35f35d02b51e73648018346caa8577bcb02423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:33:55.965460  638483 cache.go:107] acquiring lock: {Name:mk073566b0fe2be152587ae35afb0e7b5e91cd92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:33:55.965442  638483 cache.go:107] acquiring lock: {Name:mkbd6b49f7665e4f1e59327a6638af64accfbd8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:33:55.965487  638483 cache.go:107] acquiring lock: {Name:mk9827fb3e41345bba396b2d0abebc9c76ae1b5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:33:55.965495  638483 cache.go:107] acquiring lock: {Name:mke7b5e65769096d2da605e337724f9c23cd0a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:33:55.965538  638483 cache.go:107] acquiring lock: {Name:mk187eff8ce17bd71a4f3c7c012208c9c4122014 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:33:55.965558  638483 cache.go:107] acquiring lock: {Name:mk6e7f82161fd3b4764748eae2defc53fa3a2d89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:33:55.965606  638483 cache.go:107] acquiring lock: {Name:mkc02ccbaf1950fb11a48894c61699039caba7ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:33:55.965657  638483 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1207 23:33:55.965657  638483 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1207 23:33:55.965703  638483 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1207 23:33:55.965744  638483 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1207 23:33:55.965775  638483 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:33:55.965805  638483 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1207 23:33:55.965938  638483 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1207 23:33:55.965690  638483 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1207 23:33:55.967065  638483 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1207 23:33:55.967067  638483 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1207 23:33:55.967148  638483 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1207 23:33:55.967178  638483 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1207 23:33:55.967188  638483 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1207 23:33:55.967190  638483 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1207 23:33:55.967069  638483 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:33:55.967301  638483 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1207 23:33:55.988952  638483 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:33:55.988973  638483 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:33:55.988990  638483 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:33:55.989022  638483 start.go:360] acquireMachinesLock for no-preload-313006: {Name:mk5eb3348861def558ca942a9632e734d86e74b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:33:55.989113  638483 start.go:364] duration metric: took 76.581µs to acquireMachinesLock for "no-preload-313006"
	I1207 23:33:55.989135  638483 start.go:93] Provisioning new machine with config: &{Name:no-preload-313006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:33:55.989203  638483 start.go:125] createHost starting for "" (driver="docker")
	I1207 23:33:53.577241  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:33:53.577848  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:33:53.577905  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:33:53.577952  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:33:53.607049  610371 cri.go:89] found id: "4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:33:53.607071  610371 cri.go:89] found id: ""
	I1207 23:33:53.607079  610371 logs.go:282] 1 containers: [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b]
	I1207 23:33:53.607139  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:33:53.611217  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:33:53.611284  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:33:53.638729  610371 cri.go:89] found id: ""
	I1207 23:33:53.638753  610371 logs.go:282] 0 containers: []
	W1207 23:33:53.638760  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:33:53.638767  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:33:53.638830  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:33:53.666759  610371 cri.go:89] found id: ""
	I1207 23:33:53.666790  610371 logs.go:282] 0 containers: []
	W1207 23:33:53.666799  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:33:53.666806  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:33:53.666854  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:33:53.696172  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:33:53.696192  610371 cri.go:89] found id: ""
	I1207 23:33:53.696200  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:33:53.696248  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:33:53.700922  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:33:53.700999  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:33:53.729395  610371 cri.go:89] found id: ""
	I1207 23:33:53.729422  610371 logs.go:282] 0 containers: []
	W1207 23:33:53.729432  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:33:53.729441  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:33:53.729510  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:33:53.760154  610371 cri.go:89] found id: "6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778"
	I1207 23:33:53.760178  610371 cri.go:89] found id: ""
	I1207 23:33:53.760187  610371 logs.go:282] 1 containers: [6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778]
	I1207 23:33:53.760240  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:33:53.764406  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:33:53.764493  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:33:53.791972  610371 cri.go:89] found id: ""
	I1207 23:33:53.792006  610371 logs.go:282] 0 containers: []
	W1207 23:33:53.792017  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:33:53.792025  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:33:53.792095  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:33:53.820636  610371 cri.go:89] found id: ""
	I1207 23:33:53.820660  610371 logs.go:282] 0 containers: []
	W1207 23:33:53.820667  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:33:53.820679  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:33:53.820692  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:33:53.848796  610371 logs.go:123] Gathering logs for kube-controller-manager [6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778] ...
	I1207 23:33:53.848830  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778"
	I1207 23:33:53.876556  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:33:53.876586  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:33:53.922231  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:33:53.922269  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:33:53.954026  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:33:53.954055  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:33:54.036036  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:33:54.036075  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:33:54.068205  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:33:54.068242  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:33:54.143122  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:33:54.143144  610371 logs.go:123] Gathering logs for kube-apiserver [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b] ...
	I1207 23:33:54.143160  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
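Editor's note: the cycle above repeats for as long as the apiserver stays unreachable: probe /healthz, and on "connection refused" enumerate each control-plane component's containers with crictl, then tail the logs of whatever exists. The Go sketch below mirrors only that enumeration step; the helper name listContainerIDs and running the commands locally are illustrative assumptions, since minikube actually issues them over SSH inside the node via ssh_runner.

// Sketch of the container-enumeration step seen in the log above: for each
// component name, ask crictl for matching container IDs, then tail the logs
// of any that exist. Local exec and helper names are assumptions; minikube
// runs the same commands over SSH inside the node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs crictl reports for a given container name.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, mirroring the "--tail 400" calls in the log.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
		}
	}
}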
	I1207 23:33:56.687474  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:33:56.687961  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:33:56.688026  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:33:56.688093  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:33:56.721683  610371 cri.go:89] found id: "4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:33:56.721712  610371 cri.go:89] found id: ""
	I1207 23:33:56.721724  610371 logs.go:282] 1 containers: [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b]
	I1207 23:33:56.721786  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:33:56.726350  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:33:56.726430  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:33:56.756573  610371 cri.go:89] found id: ""
	I1207 23:33:56.756600  610371 logs.go:282] 0 containers: []
	W1207 23:33:56.756610  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:33:56.756619  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:33:56.756680  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:33:56.786923  610371 cri.go:89] found id: ""
	I1207 23:33:56.786953  610371 logs.go:282] 0 containers: []
	W1207 23:33:56.786964  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:33:56.786973  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:33:56.787037  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:33:56.817482  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:33:56.817509  610371 cri.go:89] found id: ""
	I1207 23:33:56.817520  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:33:56.817577  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:33:56.822199  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:33:56.822268  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:33:56.851968  610371 cri.go:89] found id: ""
	I1207 23:33:56.851996  610371 logs.go:282] 0 containers: []
	W1207 23:33:56.852008  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:33:56.852017  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:33:56.852078  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:33:56.885659  610371 cri.go:89] found id: "6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778"
	I1207 23:33:56.885684  610371 cri.go:89] found id: ""
	I1207 23:33:56.885694  610371 logs.go:282] 1 containers: [6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778]
	I1207 23:33:56.885762  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:33:56.889960  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:33:56.890034  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:33:56.921909  610371 cri.go:89] found id: ""
	I1207 23:33:56.921967  610371 logs.go:282] 0 containers: []
	W1207 23:33:56.921979  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:33:56.921987  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:33:56.922041  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:33:56.952759  610371 cri.go:89] found id: ""
	I1207 23:33:56.952794  610371 logs.go:282] 0 containers: []
	W1207 23:33:56.952806  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:33:56.952820  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:33:56.952837  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:33:57.019693  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:33:57.019715  610371 logs.go:123] Gathering logs for kube-apiserver [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b] ...
	I1207 23:33:57.019732  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:33:57.062528  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:33:57.062565  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:33:57.096229  610371 logs.go:123] Gathering logs for kube-controller-manager [6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778] ...
	I1207 23:33:57.096269  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778"
	I1207 23:33:57.134177  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:33:57.134215  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:33:57.185080  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:33:57.185123  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:33:57.230604  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:33:57.230641  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:33:57.330987  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:33:57.331020  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:33:53.524762  631235 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-320477" context rescaled to 1 replicas
	W1207 23:33:55.025189  631235 node_ready.go:57] node "old-k8s-version-320477" has "Ready":"False" status (will retry)
	W1207 23:33:57.026129  631235 node_ready.go:57] node "old-k8s-version-320477" has "Ready":"False" status (will retry)
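Editor's note: the node_ready.go retries above simply poll the node's Ready condition until it flips to True. A minimal Go sketch of such a poll follows; the kubectl invocation and the 2-second interval are illustrative assumptions, not minikube's actual implementation (which uses the Kubernetes client directly).

// Sketch of a node-readiness poll: check the Ready condition via kubectl
// and retry until it reports "True".
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func nodeReady(name string) bool {
	out, err := exec.Command("kubectl", "get", "node", name,
		"-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	for !nodeReady("old-k8s-version-320477") {
		fmt.Println(`node has "Ready":"False" status (will retry)`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("node is Ready")
}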
	I1207 23:33:55.991411  638483 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1207 23:33:55.991624  638483 start.go:159] libmachine.API.Create for "no-preload-313006" (driver="docker")
	I1207 23:33:55.991655  638483 client.go:173] LocalClient.Create starting
	I1207 23:33:55.991728  638483 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem
	I1207 23:33:55.991760  638483 main.go:143] libmachine: Decoding PEM data...
	I1207 23:33:55.991779  638483 main.go:143] libmachine: Parsing certificate...
	I1207 23:33:55.991836  638483 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem
	I1207 23:33:55.991856  638483 main.go:143] libmachine: Decoding PEM data...
	I1207 23:33:55.991865  638483 main.go:143] libmachine: Parsing certificate...
	I1207 23:33:55.992208  638483 cli_runner.go:164] Run: docker network inspect no-preload-313006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 23:33:56.012781  638483 cli_runner.go:211] docker network inspect no-preload-313006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 23:33:56.012853  638483 network_create.go:284] running [docker network inspect no-preload-313006] to gather additional debugging logs...
	I1207 23:33:56.012869  638483 cli_runner.go:164] Run: docker network inspect no-preload-313006
	W1207 23:33:56.032844  638483 cli_runner.go:211] docker network inspect no-preload-313006 returned with exit code 1
	I1207 23:33:56.032880  638483 network_create.go:287] error running [docker network inspect no-preload-313006]: docker network inspect no-preload-313006: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-313006 not found
	I1207 23:33:56.032901  638483 network_create.go:289] output of [docker network inspect no-preload-313006]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-313006 not found
	
	** /stderr **
	I1207 23:33:56.033001  638483 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:33:56.051643  638483 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-918c8f4f6e86 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:f0:02:fe:94:4b} reservation:<nil>}
	I1207 23:33:56.052420  638483 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce07fb07c16c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:d2:35:46:a2:0a} reservation:<nil>}
	I1207 23:33:56.052845  638483 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f198eadca31e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:79:39:d6:10:dc} reservation:<nil>}
	I1207 23:33:56.053383  638483 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0a95fdba7084 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:86:aa:af:1f:07:11} reservation:<nil>}
	I1207 23:33:56.054052  638483 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014f2eb0}
	I1207 23:33:56.054083  638483 network_create.go:124] attempt to create docker network no-preload-313006 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1207 23:33:56.054131  638483 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-313006 no-preload-313006
	I1207 23:33:56.109726  638483 network_create.go:108] docker network no-preload-313006 192.168.85.0/24 created
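Editor's note: the network.go lines above show the free-subnet scan that precedes the network creation: subnets already claimed by existing bridges (192.168.49.0/24, .58, .67, .76) are skipped and the next candidate, 192.168.85.0/24, is used for `docker network create`. The Go sketch below reproduces that scan in simplified form; the fixed step of 9 in the third octet and the precomputed takenSubnets map are assumptions consistent with the subnets tried above, whereas minikube derives them from `docker network inspect`.

// Minimal sketch of the free-subnet scan: walk candidate 192.168.x.0/24
// blocks and return the first one not already used by an existing network.
package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 247; third += 9 { // 49, 58, 67, 76, 85, ...
		candidate := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[candidate] {
			return candidate
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-918c8f4f6e86
		"192.168.58.0/24": true, // br-ce07fb07c16c
		"192.168.67.0/24": true, // br-f198eadca31e
		"192.168.76.0/24": true, // br-0a95fdba7084
	}
	// Prints 192.168.85.0/24, matching the subnet chosen in the log.
	fmt.Println("using free private subnet:", firstFreeSubnet(taken))
}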
	I1207 23:33:56.109759  638483 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-313006" container
	I1207 23:33:56.109834  638483 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 23:33:56.128972  638483 cli_runner.go:164] Run: docker volume create no-preload-313006 --label name.minikube.sigs.k8s.io=no-preload-313006 --label created_by.minikube.sigs.k8s.io=true
	I1207 23:33:56.148131  638483 oci.go:103] Successfully created a docker volume no-preload-313006
	I1207 23:33:56.148215  638483 cli_runner.go:164] Run: docker run --rm --name no-preload-313006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-313006 --entrypoint /usr/bin/test -v no-preload-313006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 23:33:56.171609  638483 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1207 23:33:56.179108  638483 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1207 23:33:56.189831  638483 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1207 23:33:56.206735  638483 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1207 23:33:56.257102  638483 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1207 23:33:56.283666  638483 cache.go:157] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1207 23:33:56.283693  638483 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 318.234806ms
	I1207 23:33:56.283709  638483 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1207 23:33:56.285917  638483 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1207 23:33:56.297825  638483 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1207 23:33:56.599394  638483 cache.go:157] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1207 23:33:56.599422  638483 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 633.936031ms
	I1207 23:33:56.599438  638483 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1207 23:33:56.660070  638483 oci.go:107] Successfully prepared a docker volume no-preload-313006
	I1207 23:33:56.660117  638483 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1207 23:33:56.660195  638483 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 23:33:56.660220  638483 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 23:33:56.660291  638483 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 23:33:56.723798  638483 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-313006 --name no-preload-313006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-313006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-313006 --network no-preload-313006 --ip 192.168.85.2 --volume no-preload-313006:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 23:33:57.032119  638483 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Running}}
	I1207 23:33:57.055972  638483 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:33:57.080950  638483 cli_runner.go:164] Run: docker exec no-preload-313006 stat /var/lib/dpkg/alternatives/iptables
	I1207 23:33:57.134249  638483 oci.go:144] the created container "no-preload-313006" has a running status.
	I1207 23:33:57.134281  638483 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa...
	I1207 23:33:57.275974  638483 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1207 23:33:57.477044  638483 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 23:33:57.519457  638483 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:33:57.546772  638483 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 23:33:57.546801  638483 kic_runner.go:114] Args: [docker exec --privileged no-preload-313006 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 23:33:57.600885  638483 cache.go:157] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1207 23:33:57.600927  638483 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.635427201s
	I1207 23:33:57.600950  638483 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1207 23:33:57.604837  638483 cache.go:157] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1207 23:33:57.604877  638483 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.639323511s
	I1207 23:33:57.604897  638483 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1207 23:33:57.611468  638483 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:33:57.615811  638483 cache.go:157] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1207 23:33:57.615839  638483 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.65041582s
	I1207 23:33:57.615854  638483 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1207 23:33:57.634614  638483 machine.go:94] provisionDockerMachine start ...
	I1207 23:33:57.634724  638483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:33:57.659942  638483 main.go:143] libmachine: Using SSH client type: native
	I1207 23:33:57.660269  638483 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1207 23:33:57.660292  638483 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:33:57.676957  638483 cache.go:157] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1207 23:33:57.676989  638483 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.711498201s
	I1207 23:33:57.677002  638483 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1207 23:33:57.741132  638483 cache.go:157] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 23:33:57.741159  638483 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.775736604s
	I1207 23:33:57.741171  638483 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 23:33:57.794675  638483 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-313006
	
	I1207 23:33:57.794705  638483 ubuntu.go:182] provisioning hostname "no-preload-313006"
	I1207 23:33:57.794786  638483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:33:57.816356  638483 main.go:143] libmachine: Using SSH client type: native
	I1207 23:33:57.816685  638483 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1207 23:33:57.816710  638483 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-313006 && echo "no-preload-313006" | sudo tee /etc/hostname
	I1207 23:33:57.891753  638483 cache.go:157] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1207 23:33:57.891783  638483 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 1.926330254s
	I1207 23:33:57.891795  638483 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1207 23:33:57.891813  638483 cache.go:87] Successfully saved all images to host disk.
	I1207 23:33:57.964754  638483 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-313006
	
	I1207 23:33:57.964853  638483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:33:57.984750  638483 main.go:143] libmachine: Using SSH client type: native
	I1207 23:33:57.985002  638483 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1207 23:33:57.985028  638483 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-313006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-313006/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-313006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:33:58.120385  638483 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:33:58.120430  638483 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:33:58.120455  638483 ubuntu.go:190] setting up certificates
	I1207 23:33:58.120467  638483 provision.go:84] configureAuth start
	I1207 23:33:58.120538  638483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-313006
	I1207 23:33:58.141619  638483 provision.go:143] copyHostCerts
	I1207 23:33:58.141677  638483 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:33:58.141685  638483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:33:58.141754  638483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:33:58.141857  638483 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:33:58.141873  638483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:33:58.141900  638483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:33:58.141956  638483 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:33:58.141963  638483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:33:58.141987  638483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:33:58.142041  638483 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.no-preload-313006 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-313006]
	I1207 23:33:58.313570  638483 provision.go:177] copyRemoteCerts
	I1207 23:33:58.313668  638483 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:33:58.313710  638483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:33:58.333722  638483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:33:58.432192  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:33:58.453267  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1207 23:33:58.472142  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:33:58.492082  638483 provision.go:87] duration metric: took 371.59795ms to configureAuth
	I1207 23:33:58.492117  638483 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:33:58.492316  638483 config.go:182] Loaded profile config "no-preload-313006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:33:58.492454  638483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:33:58.514725  638483 main.go:143] libmachine: Using SSH client type: native
	I1207 23:33:58.514946  638483 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1207 23:33:58.514962  638483 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:33:58.788851  638483 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:33:58.788879  638483 machine.go:97] duration metric: took 1.154242132s to provisionDockerMachine
	I1207 23:33:58.788892  638483 client.go:176] duration metric: took 2.797229294s to LocalClient.Create
	I1207 23:33:58.788921  638483 start.go:167] duration metric: took 2.797296799s to libmachine.API.Create "no-preload-313006"
	I1207 23:33:58.788930  638483 start.go:293] postStartSetup for "no-preload-313006" (driver="docker")
	I1207 23:33:58.788950  638483 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:33:58.789009  638483 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:33:58.789056  638483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:33:58.807717  638483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:33:58.904099  638483 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:33:58.907961  638483 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:33:58.908000  638483 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:33:58.908015  638483 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:33:58.908075  638483 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:33:58.908168  638483 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:33:58.908280  638483 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:33:58.916720  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:33:58.937889  638483 start.go:296] duration metric: took 148.937535ms for postStartSetup
	I1207 23:33:58.938322  638483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-313006
	I1207 23:33:58.957436  638483 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/config.json ...
	I1207 23:33:58.957745  638483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:33:58.957818  638483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:33:58.976432  638483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:33:59.067655  638483 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:33:59.072306  638483 start.go:128] duration metric: took 3.083085274s to createHost
	I1207 23:33:59.072344  638483 start.go:83] releasing machines lock for "no-preload-313006", held for 3.083220283s
	I1207 23:33:59.072425  638483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-313006
	I1207 23:33:59.090860  638483 ssh_runner.go:195] Run: cat /version.json
	I1207 23:33:59.090908  638483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:33:59.090979  638483 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:33:59.091072  638483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:33:59.110190  638483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:33:59.110591  638483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:33:59.260387  638483 ssh_runner.go:195] Run: systemctl --version
	I1207 23:33:59.267176  638483 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:33:59.300669  638483 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:33:59.305464  638483 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:33:59.305540  638483 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:33:59.332164  638483 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 23:33:59.332187  638483 start.go:496] detecting cgroup driver to use...
	I1207 23:33:59.332223  638483 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:33:59.332271  638483 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:33:59.348449  638483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:33:59.361873  638483 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:33:59.361941  638483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:33:59.379273  638483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:33:59.398461  638483 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:33:59.485396  638483 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:33:59.576998  638483 docker.go:234] disabling docker service ...
	I1207 23:33:59.577052  638483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:33:59.596470  638483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:33:59.610435  638483 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:33:59.700934  638483 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:33:59.794494  638483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:33:59.807713  638483 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:33:59.822194  638483 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:33:59.822256  638483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:33:59.832908  638483 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:33:59.832985  638483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:33:59.842430  638483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:33:59.851872  638483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:33:59.861365  638483 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:33:59.869804  638483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:33:59.879360  638483 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:33:59.894697  638483 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:33:59.904828  638483 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:33:59.913920  638483 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:33:59.923144  638483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:34:00.020406  638483 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:34:00.322306  638483 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:34:00.322403  638483 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:34:00.326890  638483 start.go:564] Will wait 60s for crictl version
	I1207 23:34:00.327077  638483 ssh_runner.go:195] Run: which crictl
	I1207 23:34:00.331752  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:34:00.359544  638483 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:34:00.359634  638483 ssh_runner.go:195] Run: crio --version
	I1207 23:34:00.393792  638483 ssh_runner.go:195] Run: crio --version
	I1207 23:34:00.427251  638483 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
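Editor's note: the sequence above configures CRI-O on the node before restarting it: set the pause image, switch cgroup_manager to "systemd", pin conmon_cgroup to "pod", and allow unprivileged low ports via default_sysctls, followed by `systemctl daemon-reload` and `systemctl restart crio`. The sketch below shows the drop-in those edits converge on; writing the fragment in one shot (and the /tmp path) is an assumption for illustration, since minikube patches /etc/crio/crio.conf.d/02-crio.conf field by field with sed over SSH.

// Sketch of the CRI-O drop-in implied by the sed edits in the log above.
package main

import (
	"fmt"
	"os"
)

const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// On a real node this would target /etc/crio/crio.conf.d/02-crio.conf and
	// require root; here we only demonstrate the content.
	if err := os.WriteFile("/tmp/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("wrote CRI-O drop-in; follow with `systemctl daemon-reload && systemctl restart crio`")
}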
	I1207 23:33:57.753386  590594 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1207 23:33:57.753815  590594 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1207 23:33:57.753870  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:33:57.753920  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:33:57.791737  590594 cri.go:89] found id: "103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2"
	I1207 23:33:57.791765  590594 cri.go:89] found id: ""
	I1207 23:33:57.791775  590594 logs.go:282] 1 containers: [103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2]
	I1207 23:33:57.791832  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:33:57.796911  590594 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:33:57.796987  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:33:57.837530  590594 cri.go:89] found id: "f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac"
	I1207 23:33:57.837555  590594 cri.go:89] found id: ""
	I1207 23:33:57.837565  590594 logs.go:282] 1 containers: [f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac]
	I1207 23:33:57.837617  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:33:57.842703  590594 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:33:57.842782  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:33:57.879805  590594 cri.go:89] found id: ""
	I1207 23:33:57.879837  590594 logs.go:282] 0 containers: []
	W1207 23:33:57.879849  590594 logs.go:284] No container was found matching "coredns"
	I1207 23:33:57.879858  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:33:57.879916  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:33:57.916664  590594 cri.go:89] found id: "f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a"
	I1207 23:33:57.916685  590594 cri.go:89] found id: ""
	I1207 23:33:57.916694  590594 logs.go:282] 1 containers: [f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a]
	I1207 23:33:57.916740  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:33:57.920982  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:33:57.921052  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:33:57.957279  590594 cri.go:89] found id: ""
	I1207 23:33:57.957309  590594 logs.go:282] 0 containers: []
	W1207 23:33:57.957320  590594 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:33:57.957351  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:33:57.957412  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:33:57.998018  590594 cri.go:89] found id: "0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df"
	I1207 23:33:57.998039  590594 cri.go:89] found id: ""
	I1207 23:33:57.998047  590594 logs.go:282] 1 containers: [0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df]
	I1207 23:33:57.998094  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:33:58.002395  590594 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:33:58.002467  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:33:58.040341  590594 cri.go:89] found id: ""
	I1207 23:33:58.040375  590594 logs.go:282] 0 containers: []
	W1207 23:33:58.040387  590594 logs.go:284] No container was found matching "kindnet"
	I1207 23:33:58.040396  590594 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:33:58.040468  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:33:58.076389  590594 cri.go:89] found id: ""
	I1207 23:33:58.076416  590594 logs.go:282] 0 containers: []
	W1207 23:33:58.076425  590594 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:33:58.076440  590594 logs.go:123] Gathering logs for kube-apiserver [103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2] ...
	I1207 23:33:58.076454  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2"
	I1207 23:33:58.115505  590594 logs.go:123] Gathering logs for etcd [f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac] ...
	I1207 23:33:58.115542  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac"
	I1207 23:33:58.155833  590594 logs.go:123] Gathering logs for kube-controller-manager [0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df] ...
	I1207 23:33:58.155869  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df"
	I1207 23:33:58.191904  590594 logs.go:123] Gathering logs for kubelet ...
	I1207 23:33:58.191932  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:33:58.295247  590594 logs.go:123] Gathering logs for dmesg ...
	I1207 23:33:58.295283  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:33:58.331179  590594 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:33:58.331218  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:33:58.393322  590594 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:33:58.393364  590594 logs.go:123] Gathering logs for kube-scheduler [f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a] ...
	I1207 23:33:58.393379  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a"
	I1207 23:33:58.474254  590594 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:33:58.474288  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:33:58.531469  590594 logs.go:123] Gathering logs for container status ...
	I1207 23:33:58.531511  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:00.428590  638483 cli_runner.go:164] Run: docker network inspect no-preload-313006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:34:00.447511  638483 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1207 23:34:00.451945  638483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:34:00.463051  638483 kubeadm.go:884] updating cluster {Name:no-preload-313006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:34:00.463157  638483 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:34:00.463187  638483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:34:00.491052  638483 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1207 23:34:00.491080  638483 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 23:34:00.491156  638483 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:34:00.491164  638483 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1207 23:34:00.491182  638483 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1207 23:34:00.491191  638483 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1207 23:34:00.491255  638483 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1207 23:34:00.491275  638483 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1207 23:34:00.491334  638483 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1207 23:34:00.491257  638483 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1207 23:34:00.492532  638483 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1207 23:34:00.492552  638483 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1207 23:34:00.492557  638483 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:34:00.492532  638483 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1207 23:34:00.492532  638483 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1207 23:34:00.492540  638483 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1207 23:34:00.492598  638483 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1207 23:34:00.492604  638483 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1207 23:34:00.626391  638483 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1207 23:34:00.634760  638483 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1207 23:34:00.646149  638483 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1207 23:34:00.650446  638483 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1207 23:34:00.665114  638483 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1207 23:34:00.665173  638483 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1207 23:34:00.665221  638483 ssh_runner.go:195] Run: which crictl
	I1207 23:34:00.674518  638483 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1207 23:34:00.674575  638483 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1207 23:34:00.674625  638483 ssh_runner.go:195] Run: which crictl
	I1207 23:34:00.684829  638483 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1207 23:34:00.684867  638483 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1207 23:34:00.684909  638483 ssh_runner.go:195] Run: which crictl
	I1207 23:34:00.689006  638483 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1207 23:34:00.689953  638483 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1207 23:34:00.690407  638483 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1207 23:34:00.690453  638483 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1207 23:34:00.690454  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1207 23:34:00.690532  638483 ssh_runner.go:195] Run: which crictl
	I1207 23:34:00.712976  638483 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1207 23:34:00.729518  638483 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1207 23:34:00.729581  638483 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1207 23:34:00.729630  638483 ssh_runner.go:195] Run: which crictl
	I1207 23:34:00.729688  638483 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1207 23:34:00.729629  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1207 23:34:00.729724  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1207 23:34:00.729727  638483 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1207 23:34:00.729824  638483 ssh_runner.go:195] Run: which crictl
	I1207 23:34:00.729789  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1207 23:34:00.729827  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1207 23:34:00.756849  638483 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1207 23:34:00.756896  638483 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1207 23:34:00.756938  638483 ssh_runner.go:195] Run: which crictl
	I1207 23:34:00.756940  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1207 23:34:00.762678  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1207 23:34:00.764483  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1207 23:34:00.764570  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1207 23:34:00.764608  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1207 23:34:00.764654  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1207 23:34:00.764613  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1207 23:33:59.885396  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:33:59.885824  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:33:59.885882  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:33:59.885930  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:33:59.914941  610371 cri.go:89] found id: "4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:33:59.914970  610371 cri.go:89] found id: ""
	I1207 23:33:59.914981  610371 logs.go:282] 1 containers: [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b]
	I1207 23:33:59.915040  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:33:59.919199  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:33:59.919269  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:33:59.948989  610371 cri.go:89] found id: ""
	I1207 23:33:59.949013  610371 logs.go:282] 0 containers: []
	W1207 23:33:59.949021  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:33:59.949027  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:33:59.949073  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:33:59.987582  610371 cri.go:89] found id: ""
	I1207 23:33:59.987610  610371 logs.go:282] 0 containers: []
	W1207 23:33:59.987620  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:33:59.987642  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:33:59.987704  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:00.016506  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:00.016536  610371 cri.go:89] found id: ""
	I1207 23:34:00.016551  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:34:00.016614  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:00.020804  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:00.020875  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:00.051947  610371 cri.go:89] found id: ""
	I1207 23:34:00.051973  610371 logs.go:282] 0 containers: []
	W1207 23:34:00.051983  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:00.051989  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:00.052038  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:00.081451  610371 cri.go:89] found id: "0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:00.081485  610371 cri.go:89] found id: "6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778"
	I1207 23:34:00.081492  610371 cri.go:89] found id: ""
	I1207 23:34:00.081502  610371 logs.go:282] 2 containers: [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d 6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778]
	I1207 23:34:00.081561  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:00.085722  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:00.089862  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:00.089931  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:00.118921  610371 cri.go:89] found id: ""
	I1207 23:34:00.118949  610371 logs.go:282] 0 containers: []
	W1207 23:34:00.118958  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:00.118965  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:00.119022  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:00.146561  610371 cri.go:89] found id: ""
	I1207 23:34:00.146592  610371 logs.go:282] 0 containers: []
	W1207 23:34:00.146604  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:00.146626  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:00.146644  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:00.236057  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:00.236102  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:00.273821  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:00.273856  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:34:00.334434  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:34:00.334462  610371 logs.go:123] Gathering logs for kube-controller-manager [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d] ...
	I1207 23:34:00.334479  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:00.362295  610371 logs.go:123] Gathering logs for kube-controller-manager [6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778] ...
	I1207 23:34:00.362322  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778"
	I1207 23:34:00.391745  610371 logs.go:123] Gathering logs for kube-apiserver [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b] ...
	I1207 23:34:00.391782  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:00.427847  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:34:00.427876  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:00.458133  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:00.458158  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:00.509461  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:34:00.509497  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1207 23:33:59.526079  631235 node_ready.go:57] node "old-k8s-version-320477" has "Ready":"False" status (will retry)
	W1207 23:34:02.024825  631235 node_ready.go:57] node "old-k8s-version-320477" has "Ready":"False" status (will retry)
	I1207 23:34:01.076944  590594 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1207 23:34:01.077448  590594 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1207 23:34:01.077515  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:34:01.077586  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:34:01.136939  590594 cri.go:89] found id: "103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2"
	I1207 23:34:01.136970  590594 cri.go:89] found id: ""
	I1207 23:34:01.137009  590594 logs.go:282] 1 containers: [103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2]
	I1207 23:34:01.137100  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:01.143689  590594 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:34:01.143777  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:34:01.216264  590594 cri.go:89] found id: "f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac"
	I1207 23:34:01.216307  590594 cri.go:89] found id: ""
	I1207 23:34:01.216319  590594 logs.go:282] 1 containers: [f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac]
	I1207 23:34:01.216390  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:01.223023  590594 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:34:01.223120  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:34:01.284394  590594 cri.go:89] found id: ""
	I1207 23:34:01.284648  590594 logs.go:282] 0 containers: []
	W1207 23:34:01.284671  590594 logs.go:284] No container was found matching "coredns"
	I1207 23:34:01.284683  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:34:01.284760  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:01.354532  590594 cri.go:89] found id: "f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a"
	I1207 23:34:01.354647  590594 cri.go:89] found id: ""
	I1207 23:34:01.354662  590594 logs.go:282] 1 containers: [f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a]
	I1207 23:34:01.354813  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:01.362020  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:01.362101  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:01.424239  590594 cri.go:89] found id: ""
	I1207 23:34:01.424271  590594 logs.go:282] 0 containers: []
	W1207 23:34:01.424282  590594 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:01.424288  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:01.424458  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:01.462912  590594 cri.go:89] found id: "0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df"
	I1207 23:34:01.462933  590594 cri.go:89] found id: ""
	I1207 23:34:01.462941  590594 logs.go:282] 1 containers: [0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df]
	I1207 23:34:01.462986  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:01.466953  590594 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:01.467025  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:01.503129  590594 cri.go:89] found id: ""
	I1207 23:34:01.503160  590594 logs.go:282] 0 containers: []
	W1207 23:34:01.503169  590594 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:01.503175  590594 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:01.503223  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:01.540128  590594 cri.go:89] found id: ""
	I1207 23:34:01.540157  590594 logs.go:282] 0 containers: []
	W1207 23:34:01.540166  590594 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:01.540182  590594 logs.go:123] Gathering logs for kube-controller-manager [0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df] ...
	I1207 23:34:01.540195  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df"
	I1207 23:34:01.575994  590594 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:01.576025  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:01.627099  590594 logs.go:123] Gathering logs for container status ...
	I1207 23:34:01.627148  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:01.673945  590594 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:01.673978  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:01.708861  590594 logs.go:123] Gathering logs for etcd [f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac] ...
	I1207 23:34:01.708895  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac"
	I1207 23:34:01.747571  590594 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:01.747603  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:01.853517  590594 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:01.853556  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:34:01.920963  590594 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:34:01.920983  590594 logs.go:123] Gathering logs for kube-apiserver [103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2] ...
	I1207 23:34:01.920997  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2"
	I1207 23:34:01.969950  590594 logs.go:123] Gathering logs for kube-scheduler [f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a] ...
	I1207 23:34:01.969982  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a"
	I1207 23:34:04.547084  590594 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1207 23:34:04.547592  590594 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1207 23:34:04.547646  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:34:04.547701  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:34:04.585517  590594 cri.go:89] found id: "103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2"
	I1207 23:34:04.585544  590594 cri.go:89] found id: ""
	I1207 23:34:04.585554  590594 logs.go:282] 1 containers: [103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2]
	I1207 23:34:04.585610  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:04.589638  590594 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:34:04.589711  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:34:04.625428  590594 cri.go:89] found id: "f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac"
	I1207 23:34:04.625455  590594 cri.go:89] found id: ""
	I1207 23:34:04.625466  590594 logs.go:282] 1 containers: [f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac]
	I1207 23:34:04.625521  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:04.629468  590594 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:34:04.629536  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:34:04.667769  590594 cri.go:89] found id: ""
	I1207 23:34:04.667797  590594 logs.go:282] 0 containers: []
	W1207 23:34:04.667808  590594 logs.go:284] No container was found matching "coredns"
	I1207 23:34:04.667817  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:34:04.667876  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:04.710406  590594 cri.go:89] found id: "f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a"
	I1207 23:34:04.710435  590594 cri.go:89] found id: ""
	I1207 23:34:04.710447  590594 logs.go:282] 1 containers: [f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a]
	I1207 23:34:04.710508  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:04.714609  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:04.714672  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:04.751096  590594 cri.go:89] found id: ""
	I1207 23:34:04.751125  590594 logs.go:282] 0 containers: []
	W1207 23:34:04.751135  590594 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:04.751144  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:04.751198  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:04.791311  590594 cri.go:89] found id: "0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df"
	I1207 23:34:04.791350  590594 cri.go:89] found id: ""
	I1207 23:34:04.791362  590594 logs.go:282] 1 containers: [0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df]
	I1207 23:34:04.791421  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:04.795997  590594 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:04.796068  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:04.835464  590594 cri.go:89] found id: ""
	I1207 23:34:04.835502  590594 logs.go:282] 0 containers: []
	W1207 23:34:04.835514  590594 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:04.835531  590594 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:04.835591  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:04.877702  590594 cri.go:89] found id: ""
	I1207 23:34:04.877732  590594 logs.go:282] 0 containers: []
	W1207 23:34:04.877743  590594 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:04.877764  590594 logs.go:123] Gathering logs for kube-apiserver [103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2] ...
	I1207 23:34:04.877781  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2"
	I1207 23:34:04.925886  590594 logs.go:123] Gathering logs for kube-controller-manager [0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df] ...
	I1207 23:34:04.925922  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df"
	I1207 23:34:04.965096  590594 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:04.965134  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:05.016889  590594 logs.go:123] Gathering logs for container status ...
	I1207 23:34:05.016936  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:05.066515  590594 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:05.066550  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:05.167467  590594 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:05.167506  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:05.213171  590594 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:05.213213  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:34:05.290174  590594 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:34:05.290197  590594 logs.go:123] Gathering logs for etcd [f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac] ...
	I1207 23:34:05.290220  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac"
	I1207 23:34:05.326616  590594 logs.go:123] Gathering logs for kube-scheduler [f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a] ...
	I1207 23:34:05.326656  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a"
	I1207 23:34:00.793165  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1207 23:34:00.799367  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1207 23:34:00.804317  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1207 23:34:00.804340  638483 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1207 23:34:00.804476  638483 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1207 23:34:00.804796  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1207 23:34:00.834578  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1207 23:34:00.834615  638483 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1207 23:34:00.834627  638483 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1207 23:34:00.834634  638483 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1207 23:34:00.834638  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1207 23:34:00.834646  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1207 23:34:00.834580  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1207 23:34:00.834689  638483 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1207 23:34:00.834716  638483 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1207 23:34:00.835256  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1207 23:34:00.893211  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1207 23:34:00.893251  638483 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1207 23:34:00.893262  638483 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1207 23:34:00.893284  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1207 23:34:00.893221  638483 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1207 23:34:00.893303  638483 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1207 23:34:00.893339  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1207 23:34:00.893379  638483 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1207 23:34:00.893392  638483 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1207 23:34:00.893428  638483 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1207 23:34:00.893515  638483 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	W1207 23:34:00.916081  638483 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1207 23:34:00.916164  638483 retry.go:31] will retry after 149.570723ms: ssh: rejected: connect failed (open failed)
	I1207 23:34:00.967447  638483 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1207 23:34:00.967489  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1207 23:34:00.967531  638483 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1207 23:34:00.967559  638483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:34:00.967574  638483 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1207 23:34:00.967608  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1207 23:34:00.967638  638483 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1207 23:34:00.967654  638483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:34:00.967709  638483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:34:00.991182  638483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:34:00.994312  638483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:34:00.997221  638483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:34:01.070954  638483 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1207 23:34:01.070994  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1207 23:34:01.071410  638483 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1207 23:34:01.071439  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1207 23:34:01.152724  638483 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1207 23:34:01.152809  638483 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1207 23:34:01.697113  638483 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1207 23:34:01.697166  638483 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1207 23:34:01.697212  638483 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1207 23:34:01.882050  638483 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:34:02.789753  638483 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.09251657s)
	I1207 23:34:02.789783  638483 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1207 23:34:02.789813  638483 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1207 23:34:02.789860  638483 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1207 23:34:02.789866  638483 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1207 23:34:02.789912  638483 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:34:02.789960  638483 ssh_runner.go:195] Run: which crictl
	I1207 23:34:04.124659  638483 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (1.334769675s)
	I1207 23:34:04.124690  638483 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1207 23:34:04.124719  638483 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1207 23:34:04.124728  638483 ssh_runner.go:235] Completed: which crictl: (1.334743598s)
	I1207 23:34:04.124765  638483 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1207 23:34:04.124795  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:34:05.597086  638483 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.472296234s)
	I1207 23:34:05.597117  638483 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1207 23:34:05.597147  638483 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1207 23:34:05.597177  638483 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.472360836s)
	I1207 23:34:05.597205  638483 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1207 23:34:05.597235  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:34:03.041935  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:34:03.042405  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:34:03.042466  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:34:03.042533  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:34:03.070466  610371 cri.go:89] found id: "4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:03.070490  610371 cri.go:89] found id: ""
	I1207 23:34:03.070500  610371 logs.go:282] 1 containers: [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b]
	I1207 23:34:03.070566  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:03.074839  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:34:03.074915  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:34:03.104762  610371 cri.go:89] found id: ""
	I1207 23:34:03.104791  610371 logs.go:282] 0 containers: []
	W1207 23:34:03.104802  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:34:03.104811  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:34:03.104871  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:34:03.132806  610371 cri.go:89] found id: ""
	I1207 23:34:03.132837  610371 logs.go:282] 0 containers: []
	W1207 23:34:03.132847  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:34:03.132853  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:34:03.132910  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:03.161872  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:03.161899  610371 cri.go:89] found id: ""
	I1207 23:34:03.161908  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:34:03.161969  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:03.166437  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:03.166495  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:03.194598  610371 cri.go:89] found id: ""
	I1207 23:34:03.194627  610371 logs.go:282] 0 containers: []
	W1207 23:34:03.194639  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:03.194648  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:03.194714  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:03.225510  610371 cri.go:89] found id: "0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:03.225537  610371 cri.go:89] found id: "6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778"
	I1207 23:34:03.225543  610371 cri.go:89] found id: ""
	I1207 23:34:03.225554  610371 logs.go:282] 2 containers: [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d 6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778]
	I1207 23:34:03.225619  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:03.229857  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:03.233711  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:03.233788  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:03.266550  610371 cri.go:89] found id: ""
	I1207 23:34:03.266580  610371 logs.go:282] 0 containers: []
	W1207 23:34:03.266592  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:03.266600  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:03.266661  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:03.317075  610371 cri.go:89] found id: ""
	I1207 23:34:03.317187  610371 logs.go:282] 0 containers: []
	W1207 23:34:03.317211  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:03.317260  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:34:03.317382  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:03.363354  610371 logs.go:123] Gathering logs for kube-apiserver [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b] ...
	I1207 23:34:03.363390  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:03.410993  610371 logs.go:123] Gathering logs for kube-controller-manager [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d] ...
	I1207 23:34:03.411030  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:03.449064  610371 logs.go:123] Gathering logs for kube-controller-manager [6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778] ...
	I1207 23:34:03.449442  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778"
	I1207 23:34:03.486497  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:03.486543  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:03.585207  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:03.585244  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:03.617311  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:03.617356  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:34:03.683951  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:34:03.683972  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:34:03.683985  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:03.714670  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:03.714698  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:06.269414  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:34:06.269879  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:34:06.269932  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:34:06.269992  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:34:06.300140  610371 cri.go:89] found id: "4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:06.300164  610371 cri.go:89] found id: ""
	I1207 23:34:06.300173  610371 logs.go:282] 1 containers: [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b]
	I1207 23:34:06.300241  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:06.304438  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:34:06.304509  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:34:06.340033  610371 cri.go:89] found id: ""
	I1207 23:34:06.340071  610371 logs.go:282] 0 containers: []
	W1207 23:34:06.340082  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:34:06.340091  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:34:06.340165  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:34:06.373264  610371 cri.go:89] found id: ""
	I1207 23:34:06.373291  610371 logs.go:282] 0 containers: []
	W1207 23:34:06.373302  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:34:06.373311  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:34:06.373385  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:06.404114  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:06.404136  610371 cri.go:89] found id: ""
	I1207 23:34:06.404145  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:34:06.404204  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:06.408746  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:06.408815  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:06.436924  610371 cri.go:89] found id: ""
	I1207 23:34:06.436958  610371 logs.go:282] 0 containers: []
	W1207 23:34:06.436969  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:06.436977  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:06.437033  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:06.465713  610371 cri.go:89] found id: "0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:06.465740  610371 cri.go:89] found id: "6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778"
	I1207 23:34:06.465782  610371 cri.go:89] found id: ""
	I1207 23:34:06.465793  610371 logs.go:282] 2 containers: [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d 6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778]
	I1207 23:34:06.465871  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:06.470717  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:06.475192  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:06.475270  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:06.506209  610371 cri.go:89] found id: ""
	I1207 23:34:06.506234  610371 logs.go:282] 0 containers: []
	W1207 23:34:06.506242  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:06.506248  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:06.506310  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:06.536963  610371 cri.go:89] found id: ""
	I1207 23:34:06.536988  610371 logs.go:282] 0 containers: []
	W1207 23:34:06.536995  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:06.537009  610371 logs.go:123] Gathering logs for kube-apiserver [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b] ...
	I1207 23:34:06.537025  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:06.573745  610371 logs.go:123] Gathering logs for kube-controller-manager [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d] ...
	I1207 23:34:06.573781  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:06.602477  610371 logs.go:123] Gathering logs for kube-controller-manager [6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778] ...
	I1207 23:34:06.602505  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6ea869de482b1e9d4029430cb082e602d945922717fd8de66a9407dc0cdf6778"
	I1207 23:34:06.631489  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:34:06.631523  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:06.675075  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:06.675114  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:06.720970  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:34:06.721019  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:06.759083  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:06.759122  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:06.810389  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:06.810427  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:06.894390  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:06.894426  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:34:06.954155  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1207 23:34:04.025292  631235 node_ready.go:57] node "old-k8s-version-320477" has "Ready":"False" status (will retry)
	I1207 23:34:06.525610  631235 node_ready.go:49] node "old-k8s-version-320477" is "Ready"
	I1207 23:34:06.525657  631235 node_ready.go:38] duration metric: took 13.504095207s for node "old-k8s-version-320477" to be "Ready" ...
	I1207 23:34:06.525678  631235 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:34:06.525738  631235 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:34:06.540499  631235 api_server.go:72] duration metric: took 13.892295153s to wait for apiserver process to appear ...
	I1207 23:34:06.540528  631235 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:34:06.540553  631235 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1207 23:34:06.545914  631235 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1207 23:34:06.547609  631235 api_server.go:141] control plane version: v1.28.0
	I1207 23:34:06.547639  631235 api_server.go:131] duration metric: took 7.102497ms to wait for apiserver health ...
	I1207 23:34:06.547651  631235 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:34:06.551896  631235 system_pods.go:59] 8 kube-system pods found
	I1207 23:34:06.551929  631235 system_pods.go:61] "coredns-5dd5756b68-vv8vq" [36c9ee97-e1e3-4323-a423-698ebc1b76e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:34:06.551937  631235 system_pods.go:61] "etcd-old-k8s-version-320477" [c365cedf-5fec-4c12-ae35-db15b8325689] Running
	I1207 23:34:06.551945  631235 system_pods.go:61] "kindnet-gnv88" [90472b53-7730-44fa-80cc-96a20875ede5] Running
	I1207 23:34:06.551951  631235 system_pods.go:61] "kube-apiserver-old-k8s-version-320477" [e4c19df0-a55c-4f0c-9a9e-988040f3776b] Running
	I1207 23:34:06.551958  631235 system_pods.go:61] "kube-controller-manager-old-k8s-version-320477" [3d062cd0-626c-49b8-ac04-b2e49d6a1a68] Running
	I1207 23:34:06.551964  631235 system_pods.go:61] "kube-proxy-vlx4n" [cee2f481-4ff2-4dc0-acf0-40f24977a61c] Running
	I1207 23:34:06.551970  631235 system_pods.go:61] "kube-scheduler-old-k8s-version-320477" [ed23677b-742c-4021-955d-8672763acb44] Running
	I1207 23:34:06.551977  631235 system_pods.go:61] "storage-provisioner" [3252d094-8849-4585-9065-1f6e312af8cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:34:06.551986  631235 system_pods.go:74] duration metric: took 4.32748ms to wait for pod list to return data ...
	I1207 23:34:06.551999  631235 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:34:06.554502  631235 default_sa.go:45] found service account: "default"
	I1207 23:34:06.554526  631235 default_sa.go:55] duration metric: took 2.516661ms for default service account to be created ...
	I1207 23:34:06.554537  631235 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:34:06.558382  631235 system_pods.go:86] 8 kube-system pods found
	I1207 23:34:06.558419  631235 system_pods.go:89] "coredns-5dd5756b68-vv8vq" [36c9ee97-e1e3-4323-a423-698ebc1b76e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:34:06.558428  631235 system_pods.go:89] "etcd-old-k8s-version-320477" [c365cedf-5fec-4c12-ae35-db15b8325689] Running
	I1207 23:34:06.558437  631235 system_pods.go:89] "kindnet-gnv88" [90472b53-7730-44fa-80cc-96a20875ede5] Running
	I1207 23:34:06.558443  631235 system_pods.go:89] "kube-apiserver-old-k8s-version-320477" [e4c19df0-a55c-4f0c-9a9e-988040f3776b] Running
	I1207 23:34:06.558453  631235 system_pods.go:89] "kube-controller-manager-old-k8s-version-320477" [3d062cd0-626c-49b8-ac04-b2e49d6a1a68] Running
	I1207 23:34:06.558459  631235 system_pods.go:89] "kube-proxy-vlx4n" [cee2f481-4ff2-4dc0-acf0-40f24977a61c] Running
	I1207 23:34:06.558467  631235 system_pods.go:89] "kube-scheduler-old-k8s-version-320477" [ed23677b-742c-4021-955d-8672763acb44] Running
	I1207 23:34:06.558475  631235 system_pods.go:89] "storage-provisioner" [3252d094-8849-4585-9065-1f6e312af8cd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:34:06.558507  631235 retry.go:31] will retry after 223.054699ms: missing components: kube-dns
	I1207 23:34:06.815358  631235 system_pods.go:86] 8 kube-system pods found
	I1207 23:34:06.815405  631235 system_pods.go:89] "coredns-5dd5756b68-vv8vq" [36c9ee97-e1e3-4323-a423-698ebc1b76e5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:34:06.815414  631235 system_pods.go:89] "etcd-old-k8s-version-320477" [c365cedf-5fec-4c12-ae35-db15b8325689] Running
	I1207 23:34:06.815423  631235 system_pods.go:89] "kindnet-gnv88" [90472b53-7730-44fa-80cc-96a20875ede5] Running
	I1207 23:34:06.815430  631235 system_pods.go:89] "kube-apiserver-old-k8s-version-320477" [e4c19df0-a55c-4f0c-9a9e-988040f3776b] Running
	I1207 23:34:06.815436  631235 system_pods.go:89] "kube-controller-manager-old-k8s-version-320477" [3d062cd0-626c-49b8-ac04-b2e49d6a1a68] Running
	I1207 23:34:06.815441  631235 system_pods.go:89] "kube-proxy-vlx4n" [cee2f481-4ff2-4dc0-acf0-40f24977a61c] Running
	I1207 23:34:06.815446  631235 system_pods.go:89] "kube-scheduler-old-k8s-version-320477" [ed23677b-742c-4021-955d-8672763acb44] Running
	I1207 23:34:06.815455  631235 system_pods.go:89] "storage-provisioner" [3252d094-8849-4585-9065-1f6e312af8cd] Running
	I1207 23:34:06.815477  631235 retry.go:31] will retry after 318.112938ms: missing components: kube-dns
	I1207 23:34:07.137754  631235 system_pods.go:86] 8 kube-system pods found
	I1207 23:34:07.137800  631235 system_pods.go:89] "coredns-5dd5756b68-vv8vq" [36c9ee97-e1e3-4323-a423-698ebc1b76e5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:34:07.137810  631235 system_pods.go:89] "etcd-old-k8s-version-320477" [c365cedf-5fec-4c12-ae35-db15b8325689] Running
	I1207 23:34:07.137817  631235 system_pods.go:89] "kindnet-gnv88" [90472b53-7730-44fa-80cc-96a20875ede5] Running
	I1207 23:34:07.137822  631235 system_pods.go:89] "kube-apiserver-old-k8s-version-320477" [e4c19df0-a55c-4f0c-9a9e-988040f3776b] Running
	I1207 23:34:07.137828  631235 system_pods.go:89] "kube-controller-manager-old-k8s-version-320477" [3d062cd0-626c-49b8-ac04-b2e49d6a1a68] Running
	I1207 23:34:07.137833  631235 system_pods.go:89] "kube-proxy-vlx4n" [cee2f481-4ff2-4dc0-acf0-40f24977a61c] Running
	I1207 23:34:07.137839  631235 system_pods.go:89] "kube-scheduler-old-k8s-version-320477" [ed23677b-742c-4021-955d-8672763acb44] Running
	I1207 23:34:07.137845  631235 system_pods.go:89] "storage-provisioner" [3252d094-8849-4585-9065-1f6e312af8cd] Running
	I1207 23:34:07.137857  631235 system_pods.go:126] duration metric: took 583.311636ms to wait for k8s-apps to be running ...
	I1207 23:34:07.137870  631235 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:34:07.137921  631235 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:34:07.150864  631235 system_svc.go:56] duration metric: took 12.983169ms WaitForService to wait for kubelet
	I1207 23:34:07.150902  631235 kubeadm.go:587] duration metric: took 14.502703777s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:34:07.150929  631235 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:34:07.153584  631235 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:34:07.153610  631235 node_conditions.go:123] node cpu capacity is 8
	I1207 23:34:07.153624  631235 node_conditions.go:105] duration metric: took 2.689653ms to run NodePressure ...
	I1207 23:34:07.153637  631235 start.go:242] waiting for startup goroutines ...
	I1207 23:34:07.153644  631235 start.go:247] waiting for cluster config update ...
	I1207 23:34:07.153654  631235 start.go:256] writing updated cluster config ...
	I1207 23:34:07.153899  631235 ssh_runner.go:195] Run: rm -f paused
	I1207 23:34:07.158049  631235 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:34:07.162851  631235 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vv8vq" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:08.169375  631235 pod_ready.go:94] pod "coredns-5dd5756b68-vv8vq" is "Ready"
	I1207 23:34:08.169410  631235 pod_ready.go:86] duration metric: took 1.006523158s for pod "coredns-5dd5756b68-vv8vq" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:08.172572  631235 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:08.177264  631235 pod_ready.go:94] pod "etcd-old-k8s-version-320477" is "Ready"
	I1207 23:34:08.177300  631235 pod_ready.go:86] duration metric: took 4.700832ms for pod "etcd-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:08.180602  631235 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:08.185796  631235 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-320477" is "Ready"
	I1207 23:34:08.185853  631235 pod_ready.go:86] duration metric: took 5.222231ms for pod "kube-apiserver-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:08.188923  631235 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:08.366363  631235 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-320477" is "Ready"
	I1207 23:34:08.366399  631235 pod_ready.go:86] duration metric: took 177.446759ms for pod "kube-controller-manager-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:08.568784  631235 pod_ready.go:83] waiting for pod "kube-proxy-vlx4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:08.966482  631235 pod_ready.go:94] pod "kube-proxy-vlx4n" is "Ready"
	I1207 23:34:08.966511  631235 pod_ready.go:86] duration metric: took 397.702952ms for pod "kube-proxy-vlx4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:09.167404  631235 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:09.567645  631235 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-320477" is "Ready"
	I1207 23:34:09.567674  631235 pod_ready.go:86] duration metric: took 400.243091ms for pod "kube-scheduler-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:09.567687  631235 pod_ready.go:40] duration metric: took 2.409595782s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:34:09.626637  631235 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1207 23:34:09.628466  631235 out.go:203] 
	W1207 23:34:09.629775  631235 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1207 23:34:09.631048  631235 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1207 23:34:09.633120  631235 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-320477" cluster and "default" namespace by default
	I1207 23:34:07.915409  590594 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1207 23:34:07.915934  590594 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1207 23:34:07.916002  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:34:07.916070  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:34:07.961884  590594 cri.go:89] found id: "103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2"
	I1207 23:34:07.961911  590594 cri.go:89] found id: ""
	I1207 23:34:07.961922  590594 logs.go:282] 1 containers: [103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2]
	I1207 23:34:07.961990  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:07.967232  590594 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:34:07.967304  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:34:08.009524  590594 cri.go:89] found id: "f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac"
	I1207 23:34:08.009551  590594 cri.go:89] found id: ""
	I1207 23:34:08.009562  590594 logs.go:282] 1 containers: [f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac]
	I1207 23:34:08.009623  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:08.014791  590594 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:34:08.014859  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:34:08.062318  590594 cri.go:89] found id: ""
	I1207 23:34:08.062377  590594 logs.go:282] 0 containers: []
	W1207 23:34:08.062389  590594 logs.go:284] No container was found matching "coredns"
	I1207 23:34:08.062398  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:34:08.062474  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:08.103083  590594 cri.go:89] found id: "f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a"
	I1207 23:34:08.103112  590594 cri.go:89] found id: ""
	I1207 23:34:08.103123  590594 logs.go:282] 1 containers: [f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a]
	I1207 23:34:08.103182  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:08.107501  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:08.107569  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:08.148654  590594 cri.go:89] found id: ""
	I1207 23:34:08.148685  590594 logs.go:282] 0 containers: []
	W1207 23:34:08.148698  590594 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:08.148707  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:08.148771  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:08.198814  590594 cri.go:89] found id: "0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df"
	I1207 23:34:08.198838  590594 cri.go:89] found id: ""
	I1207 23:34:08.198850  590594 logs.go:282] 1 containers: [0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df]
	I1207 23:34:08.198911  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:08.204001  590594 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:08.204088  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:08.242321  590594 cri.go:89] found id: ""
	I1207 23:34:08.242370  590594 logs.go:282] 0 containers: []
	W1207 23:34:08.242381  590594 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:08.242390  590594 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:08.242446  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:08.283413  590594 cri.go:89] found id: ""
	I1207 23:34:08.283450  590594 logs.go:282] 0 containers: []
	W1207 23:34:08.283461  590594 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:08.283478  590594 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:08.283504  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:08.340512  590594 logs.go:123] Gathering logs for container status ...
	I1207 23:34:08.340561  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:08.382162  590594 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:08.382189  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:08.417335  590594 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:08.417378  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:34:08.479296  590594 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:34:08.479337  590594 logs.go:123] Gathering logs for kube-apiserver [103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2] ...
	I1207 23:34:08.479355  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2"
	I1207 23:34:08.518235  590594 logs.go:123] Gathering logs for kube-controller-manager [0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df] ...
	I1207 23:34:08.518267  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df"
	I1207 23:34:08.554954  590594 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:08.554990  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:08.662831  590594 logs.go:123] Gathering logs for etcd [f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac] ...
	I1207 23:34:08.662873  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac"
	I1207 23:34:08.699141  590594 logs.go:123] Gathering logs for kube-scheduler [f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a] ...
	I1207 23:34:08.699179  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a"
	I1207 23:34:07.105578  638483 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.508313534s)
	I1207 23:34:07.105656  638483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:34:07.105717  638483 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.50848477s)
	I1207 23:34:07.105750  638483 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1207 23:34:07.105789  638483 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1207 23:34:07.105843  638483 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1207 23:34:07.133344  638483 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1207 23:34:07.133452  638483 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1207 23:34:08.250655  638483 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.144783963s)
	I1207 23:34:08.250693  638483 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1207 23:34:08.250722  638483 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1207 23:34:08.250783  638483 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1207 23:34:08.250723  638483 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.117247791s)
	I1207 23:34:08.250859  638483 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1207 23:34:08.250887  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1207 23:34:09.492613  638483 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.241804158s)
	I1207 23:34:09.492639  638483 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1207 23:34:09.492670  638483 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1207 23:34:09.492715  638483 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1207 23:34:10.108964  638483 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1207 23:34:10.109018  638483 cache_images.go:125] Successfully loaded all cached images
	I1207 23:34:10.109026  638483 cache_images.go:94] duration metric: took 9.617924102s to LoadCachedImages
	I1207 23:34:10.109043  638483 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1207 23:34:10.109232  638483 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-313006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:34:10.109322  638483 ssh_runner.go:195] Run: crio config
	I1207 23:34:10.160982  638483 cni.go:84] Creating CNI manager for ""
	I1207 23:34:10.161006  638483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:34:10.161026  638483 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:34:10.161054  638483 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-313006 NodeName:no-preload-313006 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:34:10.161193  638483 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-313006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:34:10.161284  638483 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1207 23:34:10.169963  638483 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1207 23:34:10.170018  638483 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1207 23:34:10.177893  638483 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1207 23:34:10.177944  638483 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1207 23:34:10.177975  638483 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1207 23:34:10.177992  638483 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1207 23:34:10.182003  638483 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1207 23:34:10.182031  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1207 23:34:09.455259  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:34:09.455685  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:34:09.455754  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:34:09.455816  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:34:09.484316  610371 cri.go:89] found id: "4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:09.484356  610371 cri.go:89] found id: ""
	I1207 23:34:09.484365  610371 logs.go:282] 1 containers: [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b]
	I1207 23:34:09.484425  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:09.488514  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:34:09.488587  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:34:09.516765  610371 cri.go:89] found id: ""
	I1207 23:34:09.516799  610371 logs.go:282] 0 containers: []
	W1207 23:34:09.516810  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:34:09.516819  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:34:09.516892  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:34:09.545436  610371 cri.go:89] found id: ""
	I1207 23:34:09.545464  610371 logs.go:282] 0 containers: []
	W1207 23:34:09.545473  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:34:09.545480  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:34:09.545537  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:09.573783  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:09.573819  610371 cri.go:89] found id: ""
	I1207 23:34:09.573830  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:34:09.573879  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:09.578111  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:09.578195  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:09.607436  610371 cri.go:89] found id: ""
	I1207 23:34:09.607466  610371 logs.go:282] 0 containers: []
	W1207 23:34:09.607477  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:09.607486  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:09.607562  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:09.636281  610371 cri.go:89] found id: "0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:09.636300  610371 cri.go:89] found id: ""
	I1207 23:34:09.636310  610371 logs.go:282] 1 containers: [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d]
	I1207 23:34:09.636389  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:09.640449  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:09.640518  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:09.679266  610371 cri.go:89] found id: ""
	I1207 23:34:09.679296  610371 logs.go:282] 0 containers: []
	W1207 23:34:09.679313  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:09.679321  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:09.679397  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:09.709559  610371 cri.go:89] found id: ""
	I1207 23:34:09.709591  610371 logs.go:282] 0 containers: []
	W1207 23:34:09.709604  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:09.709618  610371 logs.go:123] Gathering logs for kube-apiserver [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b] ...
	I1207 23:34:09.709636  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:09.747319  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:34:09.747407  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:09.781660  610371 logs.go:123] Gathering logs for kube-controller-manager [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d] ...
	I1207 23:34:09.781693  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:09.818248  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:09.818279  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:09.873425  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:34:09.873470  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:09.907520  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:09.907553  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:10.009842  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:10.009886  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:10.052488  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:10.052525  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:34:10.115930  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:34:12.617390  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:34:12.617782  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:34:12.617838  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:34:12.617894  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:34:12.646879  610371 cri.go:89] found id: "4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:12.646908  610371 cri.go:89] found id: ""
	I1207 23:34:12.646922  610371 logs.go:282] 1 containers: [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b]
	I1207 23:34:12.646986  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:12.652090  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:34:12.652160  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:34:12.680146  610371 cri.go:89] found id: ""
	I1207 23:34:12.680180  610371 logs.go:282] 0 containers: []
	W1207 23:34:12.680193  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:34:12.680201  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:34:12.680263  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:34:12.712090  610371 cri.go:89] found id: ""
	I1207 23:34:12.712119  610371 logs.go:282] 0 containers: []
	W1207 23:34:12.712131  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:34:12.712140  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:34:12.712200  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:11.097212  638483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:34:11.110569  638483 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1207 23:34:11.114445  638483 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1207 23:34:11.114475  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1207 23:34:11.256830  638483 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1207 23:34:11.260883  638483 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1207 23:34:11.260915  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1207 23:34:11.450291  638483 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:34:11.459141  638483 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1207 23:34:11.473598  638483 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1207 23:34:11.604490  638483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1207 23:34:11.619238  638483 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:34:11.623630  638483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:34:11.636835  638483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:34:11.726214  638483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:34:11.749037  638483 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006 for IP: 192.168.85.2
	I1207 23:34:11.749064  638483 certs.go:195] generating shared ca certs ...
	I1207 23:34:11.749084  638483 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:11.749257  638483 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:34:11.749311  638483 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:34:11.749346  638483 certs.go:257] generating profile certs ...
	I1207 23:34:11.749427  638483 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/client.key
	I1207 23:34:11.749449  638483 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/client.crt with IP's: []
	I1207 23:34:11.842973  638483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/client.crt ...
	I1207 23:34:11.843009  638483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/client.crt: {Name:mk0c4bbfb33f4b0764db72ef057f2c16ebe30e18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:11.843212  638483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/client.key ...
	I1207 23:34:11.843224  638483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/client.key: {Name:mkb0c045bf6cba965b36103e7d5b7eb72db8e935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:11.843302  638483 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.key.717a55f9
	I1207 23:34:11.843318  638483 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.crt.717a55f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1207 23:34:11.922389  638483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.crt.717a55f9 ...
	I1207 23:34:11.922419  638483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.crt.717a55f9: {Name:mk1196b58def6aa10a38d6e440cb82dece70c456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:11.922588  638483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.key.717a55f9 ...
	I1207 23:34:11.922604  638483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.key.717a55f9: {Name:mk6d515842b9e9a20e77631a46ce37b7c6d327cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:11.922680  638483 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.crt.717a55f9 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.crt
	I1207 23:34:11.922754  638483 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.key.717a55f9 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.key
	I1207 23:34:11.922812  638483 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.key
	I1207 23:34:11.922827  638483 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.crt with IP's: []
	I1207 23:34:12.008365  638483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.crt ...
	I1207 23:34:12.008400  638483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.crt: {Name:mkad9746d16a75bdefacbb6840ca7bf627efcf4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:12.008607  638483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.key ...
	I1207 23:34:12.008634  638483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.key: {Name:mk0ae89915a2df806e1dc1a3f7457dccebc9e275 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:12.008854  638483 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:34:12.008908  638483 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:34:12.008923  638483 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:34:12.008960  638483 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:34:12.009001  638483 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:34:12.009040  638483 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:34:12.009106  638483 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:34:12.009725  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:34:12.029881  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:34:12.048857  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:34:12.069949  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:34:12.096318  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1207 23:34:12.117360  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:34:12.139231  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:34:12.160641  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:34:12.183754  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:34:12.205117  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:34:12.224516  638483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:34:12.243853  638483 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:34:12.257704  638483 ssh_runner.go:195] Run: openssl version
	I1207 23:34:12.264678  638483 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:34:12.273229  638483 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:34:12.280975  638483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:34:12.284946  638483 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:34:12.285025  638483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:34:12.320095  638483 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:34:12.328003  638483 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
	I1207 23:34:12.335525  638483 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:34:12.342874  638483 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:34:12.350638  638483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:34:12.354610  638483 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:34:12.354676  638483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:34:12.390143  638483 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:34:12.398414  638483 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:34:12.406373  638483 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:34:12.413709  638483 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:34:12.421074  638483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:34:12.424854  638483 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:34:12.424910  638483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:34:12.459801  638483 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:34:12.467678  638483 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
	I1207 23:34:12.475150  638483 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:34:12.479152  638483 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:34:12.479218  638483 kubeadm.go:401] StartCluster: {Name:no-preload-313006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:34:12.479305  638483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:34:12.479362  638483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:34:12.507243  638483 cri.go:89] found id: ""
	I1207 23:34:12.507305  638483 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:34:12.515440  638483 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:34:12.524218  638483 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:34:12.524281  638483 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:34:12.533316  638483 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:34:12.533368  638483 kubeadm.go:158] found existing configuration files:
	
	I1207 23:34:12.533419  638483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:34:12.541967  638483 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:34:12.542032  638483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:34:12.550215  638483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:34:12.558132  638483 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:34:12.558188  638483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:34:12.565629  638483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:34:12.573259  638483 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:34:12.573352  638483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:34:12.580766  638483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:34:12.588192  638483 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:34:12.588241  638483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
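The cleanup step above follows a simple pattern: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and, when grep exits non-zero, remove the file so the following "kubeadm init" regenerates it. A minimal shell sketch of that pattern, using only the file names and endpoint visible in the log (illustration only, not minikube's actual implementation):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # delete the kubeconfig when it does not reference the expected endpoint,
	  # so the subsequent "kubeadm init" writes a fresh one
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done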
	I1207 23:34:12.595932  638483 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:34:12.631816  638483 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1207 23:34:12.631923  638483 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:34:12.702585  638483 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:34:12.702684  638483 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:34:12.702735  638483 kubeadm.go:319] OS: Linux
	I1207 23:34:12.702795  638483 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:34:12.702859  638483 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:34:12.702923  638483 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:34:12.702988  638483 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:34:12.703053  638483 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:34:12.703114  638483 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:34:12.703791  638483 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:34:12.703865  638483 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:34:12.768954  638483 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:34:12.769126  638483 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:34:12.769278  638483 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:34:12.786711  638483 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 23:34:12.789028  638483 out.go:252]   - Generating certificates and keys ...
	I1207 23:34:12.789147  638483 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:34:12.789243  638483 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:34:12.820112  638483 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:34:12.873708  638483 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 23:34:12.908545  638483 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 23:34:13.089228  638483 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 23:34:13.145424  638483 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 23:34:13.145558  638483 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-313006] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1207 23:34:13.268512  638483 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 23:34:13.268736  638483 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-313006] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1207 23:34:13.325705  638483 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 23:34:13.399281  638483 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 23:34:13.453188  638483 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 23:34:13.453267  638483 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:34:13.515878  638483 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:34:13.647081  638483 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:34:13.775919  638483 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:34:13.837650  638483 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:34:14.022570  638483 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:34:14.023030  638483 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:34:14.026916  638483 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 23:34:11.282673  590594 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1207 23:34:11.283248  590594 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1207 23:34:11.283353  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:34:11.283426  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:34:11.329521  590594 cri.go:89] found id: "103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2"
	I1207 23:34:11.329547  590594 cri.go:89] found id: ""
	I1207 23:34:11.329558  590594 logs.go:282] 1 containers: [103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2]
	I1207 23:34:11.329618  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:11.333762  590594 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:34:11.333849  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:34:11.385049  590594 cri.go:89] found id: "f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac"
	I1207 23:34:11.385068  590594 cri.go:89] found id: ""
	I1207 23:34:11.385076  590594 logs.go:282] 1 containers: [f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac]
	I1207 23:34:11.385135  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:11.389872  590594 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:34:11.389943  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:34:11.438704  590594 cri.go:89] found id: ""
	I1207 23:34:11.438736  590594 logs.go:282] 0 containers: []
	W1207 23:34:11.438749  590594 logs.go:284] No container was found matching "coredns"
	I1207 23:34:11.438758  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:34:11.438820  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:11.478579  590594 cri.go:89] found id: "f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a"
	I1207 23:34:11.478607  590594 cri.go:89] found id: ""
	I1207 23:34:11.478619  590594 logs.go:282] 1 containers: [f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a]
	I1207 23:34:11.478669  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:11.482646  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:11.482707  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:11.517568  590594 cri.go:89] found id: ""
	I1207 23:34:11.517596  590594 logs.go:282] 0 containers: []
	W1207 23:34:11.517607  590594 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:11.517616  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:11.517677  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:11.553158  590594 cri.go:89] found id: "0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df"
	I1207 23:34:11.553182  590594 cri.go:89] found id: ""
	I1207 23:34:11.553199  590594 logs.go:282] 1 containers: [0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df]
	I1207 23:34:11.553263  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:11.557376  590594 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:11.557443  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:11.592532  590594 cri.go:89] found id: ""
	I1207 23:34:11.592563  590594 logs.go:282] 0 containers: []
	W1207 23:34:11.592573  590594 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:11.592581  590594 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:11.592640  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:11.630196  590594 cri.go:89] found id: ""
	I1207 23:34:11.630222  590594 logs.go:282] 0 containers: []
	W1207 23:34:11.630233  590594 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:11.630251  590594 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:11.630267  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:11.740523  590594 logs.go:123] Gathering logs for kube-apiserver [103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2] ...
	I1207 23:34:11.740559  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2"
	I1207 23:34:11.788804  590594 logs.go:123] Gathering logs for kube-scheduler [f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a] ...
	I1207 23:34:11.788838  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a"
	I1207 23:34:11.872841  590594 logs.go:123] Gathering logs for kube-controller-manager [0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df] ...
	I1207 23:34:11.872881  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df"
	I1207 23:34:11.915924  590594 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:11.915951  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:11.957690  590594 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:11.957733  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:34:12.019431  590594 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:34:12.019473  590594 logs.go:123] Gathering logs for etcd [f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac] ...
	I1207 23:34:12.019492  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac"
	I1207 23:34:12.054983  590594 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:12.055020  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:12.120888  590594 logs.go:123] Gathering logs for container status ...
	I1207 23:34:12.120926  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:14.668943  590594 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1207 23:34:14.669346  590594 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1207 23:34:14.669408  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:34:14.669472  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:34:14.704850  590594 cri.go:89] found id: "103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2"
	I1207 23:34:14.704871  590594 cri.go:89] found id: ""
	I1207 23:34:14.704879  590594 logs.go:282] 1 containers: [103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2]
	I1207 23:34:14.704924  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:14.708887  590594 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:34:14.708951  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:34:14.743521  590594 cri.go:89] found id: "f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac"
	I1207 23:34:14.743541  590594 cri.go:89] found id: ""
	I1207 23:34:14.743552  590594 logs.go:282] 1 containers: [f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac]
	I1207 23:34:14.743614  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:14.747358  590594 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:34:14.747418  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:34:14.784899  590594 cri.go:89] found id: ""
	I1207 23:34:14.784931  590594 logs.go:282] 0 containers: []
	W1207 23:34:14.784943  590594 logs.go:284] No container was found matching "coredns"
	I1207 23:34:14.784952  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:34:14.785010  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:14.821716  590594 cri.go:89] found id: "f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a"
	I1207 23:34:14.821741  590594 cri.go:89] found id: ""
	I1207 23:34:14.821752  590594 logs.go:282] 1 containers: [f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a]
	I1207 23:34:14.821812  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:14.825908  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:14.825975  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:14.861149  590594 cri.go:89] found id: ""
	I1207 23:34:14.861182  590594 logs.go:282] 0 containers: []
	W1207 23:34:14.861194  590594 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:14.861203  590594 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:14.861257  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:14.905305  590594 cri.go:89] found id: "0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df"
	I1207 23:34:14.905348  590594 cri.go:89] found id: ""
	I1207 23:34:14.905359  590594 logs.go:282] 1 containers: [0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df]
	I1207 23:34:14.905421  590594 ssh_runner.go:195] Run: which crictl
	I1207 23:34:14.910228  590594 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:14.910306  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:14.962398  590594 cri.go:89] found id: ""
	I1207 23:34:14.962428  590594 logs.go:282] 0 containers: []
	W1207 23:34:14.962440  590594 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:14.962449  590594 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:14.962512  590594 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:15.009049  590594 cri.go:89] found id: ""
	I1207 23:34:15.009080  590594 logs.go:282] 0 containers: []
	W1207 23:34:15.009092  590594 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:15.009112  590594 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:15.009127  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:15.113212  590594 logs.go:123] Gathering logs for kube-apiserver [103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2] ...
	I1207 23:34:15.113254  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 103cb68b97a8846aff3e51fec16f4562c605bf76252bbc6e9663557718c49fc2"
	I1207 23:34:15.153352  590594 logs.go:123] Gathering logs for etcd [f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac] ...
	I1207 23:34:15.153394  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7e300c9303e5cb327ba966213f2aecc6a3ee631c0868c73f557dd0fa02dcaac"
	I1207 23:34:15.190503  590594 logs.go:123] Gathering logs for kube-scheduler [f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a] ...
	I1207 23:34:15.190545  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9777d030e42a4febc054f62f2aaa0f595845dd15342d8cb09d897b085c6753a"
	I1207 23:34:15.269367  590594 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:15.269407  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:15.317091  590594 logs.go:123] Gathering logs for container status ...
	I1207 23:34:15.317135  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:15.357028  590594 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:15.357063  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:15.392576  590594 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:15.392612  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:34:15.454647  590594 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:34:15.454670  590594 logs.go:123] Gathering logs for kube-controller-manager [0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df] ...
	I1207 23:34:15.454690  590594 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c92883c5a6aa0525ec30b13a1e50eb1ad545a9695e2f62b70dd04d26109f9df"
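Each retry of the wait loop above repeats the same collection pattern: query crictl for a component's container IDs, report when none match, and tail the last 400 lines of any container that was found. A rough per-component sketch built from the commands shown in the log (a hedged illustration of the pattern, not minikube's code):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"$name\""
	    continue
	  fi
	  for id in $ids; do
	    # tail the most recent log lines of each matching container
	    sudo crictl logs --tail 400 "$id"
	  done
	done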
	I1207 23:34:14.030292  638483 out.go:252]   - Booting up control plane ...
	I1207 23:34:14.030454  638483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:34:14.030574  638483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:34:14.030652  638483 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:34:14.046360  638483 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:34:14.046548  638483 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:34:14.054234  638483 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:34:14.054443  638483 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:34:14.054518  638483 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:34:14.156055  638483 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 23:34:14.156188  638483 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 23:34:14.657716  638483 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.711883ms
	I1207 23:34:14.660609  638483 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 23:34:14.660785  638483 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1207 23:34:14.660929  638483 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 23:34:14.661049  638483 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 23:34:15.165842  638483 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 505.172865ms
	I1207 23:34:12.742361  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:12.742385  610371 cri.go:89] found id: ""
	I1207 23:34:12.742395  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:34:12.742463  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:12.746624  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:12.746705  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:12.778374  610371 cri.go:89] found id: ""
	I1207 23:34:12.778407  610371 logs.go:282] 0 containers: []
	W1207 23:34:12.778419  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:12.778428  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:12.778522  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:12.809006  610371 cri.go:89] found id: "0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:12.809029  610371 cri.go:89] found id: ""
	I1207 23:34:12.809040  610371 logs.go:282] 1 containers: [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d]
	I1207 23:34:12.809099  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:12.813410  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:12.813485  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:12.841907  610371 cri.go:89] found id: ""
	I1207 23:34:12.841941  610371 logs.go:282] 0 containers: []
	W1207 23:34:12.841954  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:12.841962  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:12.842020  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:12.871193  610371 cri.go:89] found id: ""
	I1207 23:34:12.871220  610371 logs.go:282] 0 containers: []
	W1207 23:34:12.871231  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:12.871243  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:12.871257  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:34:12.928486  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:34:12.928508  610371 logs.go:123] Gathering logs for kube-apiserver [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b] ...
	I1207 23:34:12.928524  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:12.960744  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:34:12.960775  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:12.991149  610371 logs.go:123] Gathering logs for kube-controller-manager [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d] ...
	I1207 23:34:12.991183  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:13.019694  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:13.019729  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:13.067894  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:34:13.067935  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:13.100204  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:13.100235  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:13.183525  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:13.183574  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:15.717288  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:34:15.717754  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:34:15.717822  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:34:15.717876  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:34:15.748660  610371 cri.go:89] found id: "4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:15.748693  610371 cri.go:89] found id: ""
	I1207 23:34:15.748704  610371 logs.go:282] 1 containers: [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b]
	I1207 23:34:15.748783  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:15.753288  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:34:15.753385  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:34:15.785757  610371 cri.go:89] found id: ""
	I1207 23:34:15.785780  610371 logs.go:282] 0 containers: []
	W1207 23:34:15.785788  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:34:15.785795  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:34:15.785844  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:34:15.820801  610371 cri.go:89] found id: ""
	I1207 23:34:15.820835  610371 logs.go:282] 0 containers: []
	W1207 23:34:15.820847  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:34:15.820856  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:34:15.820913  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:15.853210  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:15.853232  610371 cri.go:89] found id: ""
	I1207 23:34:15.853242  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:34:15.853290  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:15.857622  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:15.857703  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:15.887823  610371 cri.go:89] found id: ""
	I1207 23:34:15.887856  610371 logs.go:282] 0 containers: []
	W1207 23:34:15.887867  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:15.887874  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:15.887941  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:15.918649  610371 cri.go:89] found id: "0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:15.918669  610371 cri.go:89] found id: ""
	I1207 23:34:15.918677  610371 logs.go:282] 1 containers: [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d]
	I1207 23:34:15.918729  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:15.922997  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:15.923073  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:15.954602  610371 cri.go:89] found id: ""
	I1207 23:34:15.954635  610371 logs.go:282] 0 containers: []
	W1207 23:34:15.954647  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:15.954656  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:15.954723  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:15.986881  610371 cri.go:89] found id: ""
	I1207 23:34:15.986911  610371 logs.go:282] 0 containers: []
	W1207 23:34:15.986924  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:15.986939  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:34:15.986955  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:16.020006  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:16.020033  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:16.106565  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:16.106606  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:16.138000  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:16.138035  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:34:16.207568  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:34:16.207589  610371 logs.go:123] Gathering logs for kube-apiserver [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b] ...
	I1207 23:34:16.207609  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:16.238749  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:34:16.238784  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:16.270859  610371 logs.go:123] Gathering logs for kube-controller-manager [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d] ...
	I1207 23:34:16.270889  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:16.313508  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:16.313545  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:16.425926  638483 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.765221496s
	I1207 23:34:18.662613  638483 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001977791s
	I1207 23:34:18.681246  638483 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 23:34:18.691815  638483 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 23:34:18.702080  638483 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 23:34:18.702494  638483 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-313006 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 23:34:18.711988  638483 kubeadm.go:319] [bootstrap-token] Using token: pjkg7p.hcresufoakn80znt
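The control-plane-check phase logged above simply polls fixed local endpoints until each component reports healthy; the URLs are the ones kubeadm prints in this log. A small curl-based polling sketch under that assumption (illustrative only; kubeadm performs these checks internally):

	# endpoints as reported by kubeadm in the log above
	check() { curl -ksf --max-time 2 "$1" >/dev/null; }
	until check http://127.0.0.1:10248/healthz;  do sleep 1; done   # kubelet
	until check https://127.0.0.1:10257/healthz; do sleep 1; done   # kube-controller-manager
	until check https://127.0.0.1:10259/livez;   do sleep 1; done   # kube-scheduler
	until check https://192.168.85.2:8443/livez; do sleep 1; done   # kube-apiserver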
	
	
	==> CRI-O <==
	Dec 07 23:34:06 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:06.702002717Z" level=info msg="Starting container: 16e5fe0d114400a17c046a8107d7fb9f074135b0e15b3525e6ad029107869dee" id=ccfd64e4-70c8-4893-8d05-42076b803517 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:34:06 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:06.704976118Z" level=info msg="Started container" PID=2163 containerID=16e5fe0d114400a17c046a8107d7fb9f074135b0e15b3525e6ad029107869dee description=kube-system/coredns-5dd5756b68-vv8vq/coredns id=ccfd64e4-70c8-4893-8d05-42076b803517 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a15247da78420a45a1390183917a9302d9704225a16ee0fea3a1550d5fd4458a
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.110207246Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4968e76d-d9e5-4e3b-a102-dadc6ebc4f37 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.110301344Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.115953657Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:88b5694e58af8aa1a9f3223efa65e2ec7a69a4f7cc87be538f1380dae6296073 UID:2f39a4dc-d310-46a6-b18b-a82cecb43bdd NetNS:/var/run/netns/c2d05887-aa4e-4dc9-9fb5-6b9a731567f9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132cf0}] Aliases:map[]}"
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.115992425Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.126458221Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:88b5694e58af8aa1a9f3223efa65e2ec7a69a4f7cc87be538f1380dae6296073 UID:2f39a4dc-d310-46a6-b18b-a82cecb43bdd NetNS:/var/run/netns/c2d05887-aa4e-4dc9-9fb5-6b9a731567f9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132cf0}] Aliases:map[]}"
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.126647052Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.127561459Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.128740528Z" level=info msg="Ran pod sandbox 88b5694e58af8aa1a9f3223efa65e2ec7a69a4f7cc87be538f1380dae6296073 with infra container: default/busybox/POD" id=4968e76d-d9e5-4e3b-a102-dadc6ebc4f37 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.129987162Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1880f237-63f9-443b-aa7e-880f5f574021 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.130088187Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1880f237-63f9-443b-aa7e-880f5f574021 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.13012569Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1880f237-63f9-443b-aa7e-880f5f574021 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.130660062Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c4c9e334-e0bc-4959-8fd6-871209a1d1ce name=/runtime.v1.ImageService/PullImage
	Dec 07 23:34:10 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:10.13211073Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 07 23:34:12 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:12.226461262Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c4c9e334-e0bc-4959-8fd6-871209a1d1ce name=/runtime.v1.ImageService/PullImage
	Dec 07 23:34:12 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:12.227386994Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0cb40c7a-de7f-4d6a-bc7e-089e7c7ad1b4 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:34:12 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:12.228875074Z" level=info msg="Creating container: default/busybox/busybox" id=5cbd78ba-41e8-445c-a5b5-6f0026235d1c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:34:12 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:12.229011853Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:34:12 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:12.233318887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:34:12 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:12.233870496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:34:12 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:12.262133812Z" level=info msg="Created container 9d3e08cd75addcb91205d4b60633261e43189336fcb6b510806efff7d0468708: default/busybox/busybox" id=5cbd78ba-41e8-445c-a5b5-6f0026235d1c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:34:12 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:12.262783013Z" level=info msg="Starting container: 9d3e08cd75addcb91205d4b60633261e43189336fcb6b510806efff7d0468708" id=b8b68c04-3f73-474e-b695-1d42115403af name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:34:12 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:12.264583768Z" level=info msg="Started container" PID=2240 containerID=9d3e08cd75addcb91205d4b60633261e43189336fcb6b510806efff7d0468708 description=default/busybox/busybox id=b8b68c04-3f73-474e-b695-1d42115403af name=/runtime.v1.RuntimeService/StartContainer sandboxID=88b5694e58af8aa1a9f3223efa65e2ec7a69a4f7cc87be538f1380dae6296073
	Dec 07 23:34:18 old-k8s-version-320477 crio[775]: time="2025-12-07T23:34:18.899145087Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	9d3e08cd75add       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   88b5694e58af8       busybox                                          default
	16e5fe0d11440       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   a15247da78420       coredns-5dd5756b68-vv8vq                         kube-system
	1951ebc752e95       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   d64a8e6895bc8       storage-provisioner                              kube-system
	dfe3a6642f24f       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   1d2566f200dac       kindnet-gnv88                                    kube-system
	bfb649a5524ff       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   92a6b4b50505b       kube-proxy-vlx4n                                 kube-system
	97fdc3cbbea87       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   c0298d050a04c       etcd-old-k8s-version-320477                      kube-system
	92e0bd4d256f6       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   87bab456a4b2a       kube-controller-manager-old-k8s-version-320477   kube-system
	fe51fadd5ec88       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   205bab9e63d57       kube-scheduler-old-k8s-version-320477            kube-system
	606c9dc9947e5       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   f609909695872       kube-apiserver-old-k8s-version-320477            kube-system
	
	
	==> coredns [16e5fe0d114400a17c046a8107d7fb9f074135b0e15b3525e6ad029107869dee] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54093 - 36165 "HINFO IN 8913678656799796207.6283050212054935416. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024092949s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-320477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-320477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=old-k8s-version-320477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_33_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:33:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-320477
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:34:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:34:10 +0000   Sun, 07 Dec 2025 23:33:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:34:10 +0000   Sun, 07 Dec 2025 23:33:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:34:10 +0000   Sun, 07 Dec 2025 23:33:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:34:10 +0000   Sun, 07 Dec 2025 23:34:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-320477
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                94c12e17-34f4-4521-b4e4-c632ca1c3651
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-vv8vq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-320477                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-gnv88                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-320477             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-320477    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-vlx4n                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-320477             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node old-k8s-version-320477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-320477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-320477 event: Registered Node old-k8s-version-320477 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-320477 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [97fdc3cbbea8790581378967261aa815098b4a5115419bfdf764dbd8c5095c56] <==
	{"level":"info","ts":"2025-12-07T23:33:35.272388Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-07T23:33:35.272494Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-07T23:33:35.272565Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-07T23:33:35.272772Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-07T23:33:35.272857Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-07T23:33:35.853652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-07T23:33:35.853692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-07T23:33:35.853706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-12-07T23:33:35.853718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-12-07T23:33:35.853724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-07T23:33:35.853732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-12-07T23:33:35.853753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-07T23:33:35.854781Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-07T23:33:35.855374Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-320477 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-07T23:33:35.855378Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-07T23:33:35.855407Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-07T23:33:35.855589Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-07T23:33:35.855637Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-07T23:33:35.855659Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-07T23:33:35.855686Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-07T23:33:35.855718Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-07T23:33:35.856633Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-07T23:33:35.856783Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-07T23:34:07.054869Z","caller":"traceutil/trace.go:171","msg":"trace[1560934076] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"142.302802ms","start":"2025-12-07T23:34:06.912546Z","end":"2025-12-07T23:34:07.054849Z","steps":["trace[1560934076] 'process raft request'  (duration: 142.246359ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T23:34:07.054957Z","caller":"traceutil/trace.go:171","msg":"trace[1587451938] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"142.473319ms","start":"2025-12-07T23:34:06.912468Z","end":"2025-12-07T23:34:07.054941Z","steps":["trace[1587451938] 'process raft request'  (duration: 79.900214ms)","trace[1587451938] 'compare'  (duration: 62.288721ms)"],"step_count":2}
	
	
	==> kernel <==
	 23:34:20 up  2:16,  0 user,  load average: 2.63, 2.22, 1.78
	Linux old-k8s-version-320477 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dfe3a6642f24f54b0bfa0d567c25a3b540e089b9b2a81c6e516bf4d2b2fe75e9] <==
	I1207 23:33:55.465222       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:33:55.465480       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1207 23:33:55.465649       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:33:55.465669       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:33:55.465703       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:33:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:33:55.765955       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:33:55.765988       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:33:55.766001       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:33:55.766157       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:33:56.069686       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:33:56.069727       1 metrics.go:72] Registering metrics
	I1207 23:33:56.069876       1 controller.go:711] "Syncing nftables rules"
	I1207 23:34:05.766400       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:34:05.766500       1 main.go:301] handling current node
	I1207 23:34:15.767437       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:34:15.767492       1 main.go:301] handling current node
	
	
	==> kube-apiserver [606c9dc9947e5f49788d176ea912ee4b3c8c17c6804407fc98ed788bc2339a67] <==
	I1207 23:33:36.918949       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 23:33:36.919236       1 shared_informer.go:318] Caches are synced for configmaps
	I1207 23:33:36.921171       1 controller.go:624] quota admission added evaluator for: namespaces
	I1207 23:33:36.933633       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1207 23:33:36.933661       1 aggregator.go:166] initial CRD sync complete...
	I1207 23:33:36.933669       1 autoregister_controller.go:141] Starting autoregister controller
	I1207 23:33:36.933696       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:33:36.933704       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:33:36.946519       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1207 23:33:37.120640       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:33:37.824575       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1207 23:33:37.828661       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1207 23:33:37.828685       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 23:33:38.289839       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:33:38.327567       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:33:38.429828       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1207 23:33:38.435704       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1207 23:33:38.436720       1 controller.go:624] quota admission added evaluator for: endpoints
	I1207 23:33:38.440634       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:33:38.873921       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1207 23:33:39.617121       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1207 23:33:39.626869       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1207 23:33:39.640588       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1207 23:33:52.494623       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1207 23:33:52.494862       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [92e0bd4d256f6f47b335892fc9028cc1a7e6645394adff4ea2bea725929a89c4] <==
	I1207 23:33:52.551125       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-320477" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1207 23:33:52.551252       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-320477" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1207 23:33:52.554217       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vv8vq"
	I1207 23:33:52.560140       1 shared_informer.go:318] Caches are synced for HPA
	I1207 23:33:52.563764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="39.148132ms"
	I1207 23:33:52.576493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.671147ms"
	I1207 23:33:52.576610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.435µs"
	I1207 23:33:52.595519       1 shared_informer.go:318] Caches are synced for resource quota
	I1207 23:33:52.666195       1 shared_informer.go:318] Caches are synced for resource quota
	I1207 23:33:52.673880       1 shared_informer.go:318] Caches are synced for attach detach
	I1207 23:33:52.741615       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1207 23:33:53.049292       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1207 23:33:53.058915       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-q66fs"
	I1207 23:33:53.066215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.235476ms"
	I1207 23:33:53.073009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.726799ms"
	I1207 23:33:53.073162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.329µs"
	I1207 23:33:53.087726       1 shared_informer.go:318] Caches are synced for garbage collector
	I1207 23:33:53.101093       1 shared_informer.go:318] Caches are synced for garbage collector
	I1207 23:33:53.101124       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1207 23:34:06.339156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.703µs"
	I1207 23:34:06.357489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.502µs"
	I1207 23:34:06.910002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="108.543µs"
	I1207 23:34:07.537476       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1207 23:34:07.791252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.812985ms"
	I1207 23:34:07.791607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.841µs"
	
	
	==> kube-proxy [bfb649a5524ff1c35bcbb6d33548131ae0dca0ebe529e5b22c6eccc99f18fce4] <==
	I1207 23:33:52.949270       1 server_others.go:69] "Using iptables proxy"
	I1207 23:33:52.960628       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1207 23:33:52.996581       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:33:53.000732       1 server_others.go:152] "Using iptables Proxier"
	I1207 23:33:53.000780       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1207 23:33:53.000790       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1207 23:33:53.000835       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 23:33:53.001119       1 server.go:846] "Version info" version="v1.28.0"
	I1207 23:33:53.001144       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:33:53.001798       1 config.go:97] "Starting endpoint slice config controller"
	I1207 23:33:53.001836       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 23:33:53.001863       1 config.go:188] "Starting service config controller"
	I1207 23:33:53.001869       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 23:33:53.002096       1 config.go:315] "Starting node config controller"
	I1207 23:33:53.003058       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 23:33:53.102027       1 shared_informer.go:318] Caches are synced for service config
	I1207 23:33:53.102037       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 23:33:53.104030       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [fe51fadd5ec881788c14ff2a379ae26f41764c16893ff3060a412eb8862ca921] <==
	E1207 23:33:36.868397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1207 23:33:36.868393       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 23:33:36.868410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 23:33:36.868415       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1207 23:33:36.868563       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1207 23:33:36.868582       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1207 23:33:36.868606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1207 23:33:36.868586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1207 23:33:37.809177       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 23:33:37.809216       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:33:37.835887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 23:33:37.835920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1207 23:33:37.841190       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 23:33:37.841219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 23:33:37.863823       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 23:33:37.863868       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1207 23:33:37.871301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 23:33:37.871345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1207 23:33:37.942034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 23:33:37.942070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1207 23:33:38.048827       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1207 23:33:38.048866       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1207 23:33:38.110549       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1207 23:33:38.110586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1207 23:33:40.064426       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 07 23:33:52 old-k8s-version-320477 kubelet[1401]: I1207 23:33:52.512970    1401 topology_manager.go:215] "Topology Admit Handler" podUID="90472b53-7730-44fa-80cc-96a20875ede5" podNamespace="kube-system" podName="kindnet-gnv88"
	Dec 07 23:33:52 old-k8s-version-320477 kubelet[1401]: I1207 23:33:52.564507    1401 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 07 23:33:52 old-k8s-version-320477 kubelet[1401]: I1207 23:33:52.565420    1401 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 07 23:33:52 old-k8s-version-320477 kubelet[1401]: I1207 23:33:52.571452    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cee2f481-4ff2-4dc0-acf0-40f24977a61c-kube-proxy\") pod \"kube-proxy-vlx4n\" (UID: \"cee2f481-4ff2-4dc0-acf0-40f24977a61c\") " pod="kube-system/kube-proxy-vlx4n"
	Dec 07 23:33:52 old-k8s-version-320477 kubelet[1401]: I1207 23:33:52.571528    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cee2f481-4ff2-4dc0-acf0-40f24977a61c-xtables-lock\") pod \"kube-proxy-vlx4n\" (UID: \"cee2f481-4ff2-4dc0-acf0-40f24977a61c\") " pod="kube-system/kube-proxy-vlx4n"
	Dec 07 23:33:52 old-k8s-version-320477 kubelet[1401]: I1207 23:33:52.571559    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/90472b53-7730-44fa-80cc-96a20875ede5-cni-cfg\") pod \"kindnet-gnv88\" (UID: \"90472b53-7730-44fa-80cc-96a20875ede5\") " pod="kube-system/kindnet-gnv88"
	Dec 07 23:33:52 old-k8s-version-320477 kubelet[1401]: I1207 23:33:52.571595    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90472b53-7730-44fa-80cc-96a20875ede5-lib-modules\") pod \"kindnet-gnv88\" (UID: \"90472b53-7730-44fa-80cc-96a20875ede5\") " pod="kube-system/kindnet-gnv88"
	Dec 07 23:33:52 old-k8s-version-320477 kubelet[1401]: I1207 23:33:52.571629    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbm57\" (UniqueName: \"kubernetes.io/projected/90472b53-7730-44fa-80cc-96a20875ede5-kube-api-access-qbm57\") pod \"kindnet-gnv88\" (UID: \"90472b53-7730-44fa-80cc-96a20875ede5\") " pod="kube-system/kindnet-gnv88"
	Dec 07 23:33:52 old-k8s-version-320477 kubelet[1401]: I1207 23:33:52.571733    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cee2f481-4ff2-4dc0-acf0-40f24977a61c-lib-modules\") pod \"kube-proxy-vlx4n\" (UID: \"cee2f481-4ff2-4dc0-acf0-40f24977a61c\") " pod="kube-system/kube-proxy-vlx4n"
	Dec 07 23:33:52 old-k8s-version-320477 kubelet[1401]: I1207 23:33:52.571858    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28ds6\" (UniqueName: \"kubernetes.io/projected/cee2f481-4ff2-4dc0-acf0-40f24977a61c-kube-api-access-28ds6\") pod \"kube-proxy-vlx4n\" (UID: \"cee2f481-4ff2-4dc0-acf0-40f24977a61c\") " pod="kube-system/kube-proxy-vlx4n"
	Dec 07 23:33:52 old-k8s-version-320477 kubelet[1401]: I1207 23:33:52.571908    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90472b53-7730-44fa-80cc-96a20875ede5-xtables-lock\") pod \"kindnet-gnv88\" (UID: \"90472b53-7730-44fa-80cc-96a20875ede5\") " pod="kube-system/kindnet-gnv88"
	Dec 07 23:33:53 old-k8s-version-320477 kubelet[1401]: I1207 23:33:53.744410    1401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vlx4n" podStartSLOduration=1.744359673 podCreationTimestamp="2025-12-07 23:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:33:53.744173566 +0000 UTC m=+14.152241482" watchObservedRunningTime="2025-12-07 23:33:53.744359673 +0000 UTC m=+14.152427588"
	Dec 07 23:33:55 old-k8s-version-320477 kubelet[1401]: I1207 23:33:55.751754    1401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-gnv88" podStartSLOduration=1.28181207 podCreationTimestamp="2025-12-07 23:33:52 +0000 UTC" firstStartedPulling="2025-12-07 23:33:52.828009107 +0000 UTC m=+13.236077016" lastFinishedPulling="2025-12-07 23:33:55.297896733 +0000 UTC m=+15.705964636" observedRunningTime="2025-12-07 23:33:55.751662587 +0000 UTC m=+16.159730504" watchObservedRunningTime="2025-12-07 23:33:55.75169969 +0000 UTC m=+16.159767605"
	Dec 07 23:34:06 old-k8s-version-320477 kubelet[1401]: I1207 23:34:06.311795    1401 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 07 23:34:06 old-k8s-version-320477 kubelet[1401]: I1207 23:34:06.336635    1401 topology_manager.go:215] "Topology Admit Handler" podUID="3252d094-8849-4585-9065-1f6e312af8cd" podNamespace="kube-system" podName="storage-provisioner"
	Dec 07 23:34:06 old-k8s-version-320477 kubelet[1401]: I1207 23:34:06.338855    1401 topology_manager.go:215] "Topology Admit Handler" podUID="36c9ee97-e1e3-4323-a423-698ebc1b76e5" podNamespace="kube-system" podName="coredns-5dd5756b68-vv8vq"
	Dec 07 23:34:06 old-k8s-version-320477 kubelet[1401]: I1207 23:34:06.365267    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36c9ee97-e1e3-4323-a423-698ebc1b76e5-config-volume\") pod \"coredns-5dd5756b68-vv8vq\" (UID: \"36c9ee97-e1e3-4323-a423-698ebc1b76e5\") " pod="kube-system/coredns-5dd5756b68-vv8vq"
	Dec 07 23:34:06 old-k8s-version-320477 kubelet[1401]: I1207 23:34:06.365360    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74fx2\" (UniqueName: \"kubernetes.io/projected/36c9ee97-e1e3-4323-a423-698ebc1b76e5-kube-api-access-74fx2\") pod \"coredns-5dd5756b68-vv8vq\" (UID: \"36c9ee97-e1e3-4323-a423-698ebc1b76e5\") " pod="kube-system/coredns-5dd5756b68-vv8vq"
	Dec 07 23:34:06 old-k8s-version-320477 kubelet[1401]: I1207 23:34:06.365397    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3252d094-8849-4585-9065-1f6e312af8cd-tmp\") pod \"storage-provisioner\" (UID: \"3252d094-8849-4585-9065-1f6e312af8cd\") " pod="kube-system/storage-provisioner"
	Dec 07 23:34:06 old-k8s-version-320477 kubelet[1401]: I1207 23:34:06.365435    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gttn6\" (UniqueName: \"kubernetes.io/projected/3252d094-8849-4585-9065-1f6e312af8cd-kube-api-access-gttn6\") pod \"storage-provisioner\" (UID: \"3252d094-8849-4585-9065-1f6e312af8cd\") " pod="kube-system/storage-provisioner"
	Dec 07 23:34:06 old-k8s-version-320477 kubelet[1401]: I1207 23:34:06.814253    1401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.814175318 podCreationTimestamp="2025-12-07 23:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:34:06.814054567 +0000 UTC m=+27.222122481" watchObservedRunningTime="2025-12-07 23:34:06.814175318 +0000 UTC m=+27.222243431"
	Dec 07 23:34:06 old-k8s-version-320477 kubelet[1401]: I1207 23:34:06.909883    1401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vv8vq" podStartSLOduration=14.909827319 podCreationTimestamp="2025-12-07 23:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:34:06.909319413 +0000 UTC m=+27.317387329" watchObservedRunningTime="2025-12-07 23:34:06.909827319 +0000 UTC m=+27.317895233"
	Dec 07 23:34:09 old-k8s-version-320477 kubelet[1401]: I1207 23:34:09.808078    1401 topology_manager.go:215] "Topology Admit Handler" podUID="2f39a4dc-d310-46a6-b18b-a82cecb43bdd" podNamespace="default" podName="busybox"
	Dec 07 23:34:09 old-k8s-version-320477 kubelet[1401]: I1207 23:34:09.885789    1401 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qftq5\" (UniqueName: \"kubernetes.io/projected/2f39a4dc-d310-46a6-b18b-a82cecb43bdd-kube-api-access-qftq5\") pod \"busybox\" (UID: \"2f39a4dc-d310-46a6-b18b-a82cecb43bdd\") " pod="default/busybox"
	Dec 07 23:34:12 old-k8s-version-320477 kubelet[1401]: I1207 23:34:12.795722    1401 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.699147663 podCreationTimestamp="2025-12-07 23:34:09 +0000 UTC" firstStartedPulling="2025-12-07 23:34:10.13031181 +0000 UTC m=+30.538379708" lastFinishedPulling="2025-12-07 23:34:12.226827858 +0000 UTC m=+32.634895765" observedRunningTime="2025-12-07 23:34:12.795032746 +0000 UTC m=+33.203100662" watchObservedRunningTime="2025-12-07 23:34:12.79566372 +0000 UTC m=+33.203731634"
	
	
	==> storage-provisioner [1951ebc752e956e0c74cd6172d1e90b9abe2752010748db37ca949dd098526eb] <==
	I1207 23:34:06.710632       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 23:34:06.722888       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 23:34:06.722967       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 23:34:06.731850       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 23:34:06.732471       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-320477_c0d007ae-f702-4626-ba2f-b78653a420f8!
	I1207 23:34:06.731989       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ac3ae20-044f-4c8f-a42d-d1ab1a68535f", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-320477_c0d007ae-f702-4626-ba2f-b78653a420f8 became leader
	I1207 23:34:06.833581       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-320477_c0d007ae-f702-4626-ba2f-b78653a420f8!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-320477 -n old-k8s-version-320477
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-320477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-313006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-313006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (255.318105ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:34:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
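The check that fails above is minikube's paused-state probe: it shells into the node and runs "sudo runc list -f json", which errors out because /run/runc does not exist on this crio node. A minimal way to reproduce the probe by hand, assuming the profile name from this run and that crictl is present in the node image (assumptions, not part of the test output):

	minikube ssh -p no-preload-313006 -- sudo runc list -f json   # same command the paused check runs; fails here with "open /run/runc: no such file or directory"
	minikube ssh -p no-preload-313006 -- sudo crictl ps           # CRI-level view of the running containers via cri-o, to confirm the cluster is not actually paused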
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-313006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-313006 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-313006 describe deploy/metrics-server -n kube-system: exit status 1 (62.21234ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-313006 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
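When the deployment does exist, the image the addon actually rendered can be read straight from the pod template; a sketch using the kubectl context from this test:

	kubectl --context no-preload-313006 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'

Here it returns nothing, because the enable step never got far enough to create the metrics-server deployment.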
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-313006
helpers_test.go:243: (dbg) docker inspect no-preload-313006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28",
	        "Created": "2025-12-07T23:33:56.743918699Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 639000,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:33:56.779078869Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28/hosts",
	        "LogPath": "/var/lib/docker/containers/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28-json.log",
	        "Name": "/no-preload-313006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-313006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-313006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28",
	                "LowerDir": "/var/lib/docker/overlay2/3127bde15e4dc2f4657d8e4018b5da1f90b377ad2f68b2bb2e943541b2587371-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3127bde15e4dc2f4657d8e4018b5da1f90b377ad2f68b2bb2e943541b2587371/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3127bde15e4dc2f4657d8e4018b5da1f90b377ad2f68b2bb2e943541b2587371/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3127bde15e4dc2f4657d8e4018b5da1f90b377ad2f68b2bb2e943541b2587371/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-313006",
	                "Source": "/var/lib/docker/volumes/no-preload-313006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-313006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-313006",
	                "name.minikube.sigs.k8s.io": "no-preload-313006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8d4f927c669bf4235d6acb031a2f93658d21a0a5b0bdd17917fa716eab83fce1",
	            "SandboxKey": "/var/run/docker/netns/8d4f927c669b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-313006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "357321d5a31d4d37dba08f8b7360dac5f2baa6c86fc4940023c2b5c75f1a37a8",
	                    "EndpointID": "b6ba8d84e6e102af8acc3cdd569cd7a6192b40c482546f0d3bf7896e57bceadf",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "66:be:ab:e6:03:52",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-313006",
	                        "f2f71b478561"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
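For quick triage, the interesting fields of the inspect output above (container state and the host ports mapped into the node) can be pulled without the full JSON dump; a sketch using docker's Go-template formatting:

	docker inspect no-preload-313006 --format '{{.State.Status}} pid={{.State.Pid}}'
	docker inspect no-preload-313006 --format '{{json .NetworkSettings.Ports}}'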
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-313006 -n no-preload-313006
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-313006 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-313006 logs -n 25: (1.026528911s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-600852 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo containerd config dump                                                                                                                                                                                                  │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo crio config                                                                                                                                                                                                             │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ delete  │ -p cilium-600852                                                                                                                                                                                                                              │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ start   │ -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p cert-expiration-612608 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-612608 │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ delete  │ -p cert-expiration-612608                                                                                                                                                                                                                     │ cert-expiration-612608 │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-320477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ stop    │ -p old-k8s-version-320477 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-320477 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ delete  │ -p stopped-upgrade-604160                                                                                                                                                                                                                     │ stopped-upgrade-604160 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-654118     │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-313006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:34:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:34:40.376815  648820 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:34:40.376913  648820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:34:40.376920  648820 out.go:374] Setting ErrFile to fd 2...
	I1207 23:34:40.376925  648820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:34:40.377142  648820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:34:40.377697  648820 out.go:368] Setting JSON to false
	I1207 23:34:40.378956  648820 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8224,"bootTime":1765142256,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:34:40.379025  648820 start.go:143] virtualization: kvm guest
	I1207 23:34:40.381495  648820 out.go:179] * [embed-certs-654118] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:34:40.383057  648820 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:34:40.383076  648820 notify.go:221] Checking for updates...
	I1207 23:34:40.385639  648820 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:34:40.386927  648820 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:34:40.388023  648820 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:34:40.389504  648820 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:34:40.390877  648820 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:34:40.392568  648820 config.go:182] Loaded profile config "kubernetes-upgrade-703538": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:34:40.392677  648820 config.go:182] Loaded profile config "no-preload-313006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:34:40.392761  648820 config.go:182] Loaded profile config "old-k8s-version-320477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1207 23:34:40.392854  648820 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:34:40.420815  648820 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:34:40.420932  648820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:34:40.484864  648820 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-07 23:34:40.472703965 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:34:40.485012  648820 docker.go:319] overlay module found
	I1207 23:34:40.486796  648820 out.go:179] * Using the docker driver based on user configuration
	I1207 23:34:40.487888  648820 start.go:309] selected driver: docker
	I1207 23:34:40.487903  648820 start.go:927] validating driver "docker" against <nil>
	I1207 23:34:40.487917  648820 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:34:40.488782  648820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:34:40.549053  648820 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-07 23:34:40.538003935 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:34:40.549237  648820 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 23:34:40.549538  648820 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:34:40.551445  648820 out.go:179] * Using Docker driver with root privileges
	I1207 23:34:40.553136  648820 cni.go:84] Creating CNI manager for ""
	I1207 23:34:40.553220  648820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:34:40.553235  648820 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 23:34:40.553340  648820 start.go:353] cluster config:
	{Name:embed-certs-654118 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-654118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:34:40.554757  648820 out.go:179] * Starting "embed-certs-654118" primary control-plane node in "embed-certs-654118" cluster
	I1207 23:34:40.555866  648820 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:34:40.556974  648820 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:34:40.558167  648820 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:34:40.558204  648820 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 23:34:40.558212  648820 cache.go:65] Caching tarball of preloaded images
	I1207 23:34:40.558276  648820 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:34:40.558303  648820 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:34:40.558344  648820 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:34:40.558471  648820 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/config.json ...
	I1207 23:34:40.558499  648820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/config.json: {Name:mkf56b72fcd505db0a9bc9e4cc13b521976c3649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:40.580487  648820 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:34:40.580510  648820 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:34:40.580528  648820 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:34:40.580565  648820 start.go:360] acquireMachinesLock for embed-certs-654118: {Name:mk7c4d25ea4936301d1a96de829bb052643e31a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:34:40.580669  648820 start.go:364] duration metric: took 85.628µs to acquireMachinesLock for "embed-certs-654118"
	I1207 23:34:40.580713  648820 start.go:93] Provisioning new machine with config: &{Name:embed-certs-654118 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-654118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:34:40.580932  648820 start.go:125] createHost starting for "" (driver="docker")
	W1207 23:34:37.290122  638483 node_ready.go:57] node "no-preload-313006" has "Ready":"False" status (will retry)
	I1207 23:34:39.290068  638483 node_ready.go:49] node "no-preload-313006" is "Ready"
	I1207 23:34:39.290098  638483 node_ready.go:38] duration metric: took 13.503033091s for node "no-preload-313006" to be "Ready" ...
	I1207 23:34:39.290112  638483 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:34:39.290182  638483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:34:39.303251  638483 api_server.go:72] duration metric: took 13.7975959s to wait for apiserver process to appear ...
	I1207 23:34:39.303281  638483 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:34:39.303306  638483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1207 23:34:39.308068  638483 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1207 23:34:39.309119  638483 api_server.go:141] control plane version: v1.35.0-beta.0
	I1207 23:34:39.309148  638483 api_server.go:131] duration metric: took 5.857719ms to wait for apiserver health ...
	I1207 23:34:39.309160  638483 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:34:39.312910  638483 system_pods.go:59] 8 kube-system pods found
	I1207 23:34:39.312947  638483 system_pods.go:61] "coredns-7d764666f9-btjrp" [c81bd338-0a5e-4937-8442-bbacd5f685c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:34:39.312954  638483 system_pods.go:61] "etcd-no-preload-313006" [2124ac32-ed11-49d4-b522-e0bb8b268bb1] Running
	I1207 23:34:39.312960  638483 system_pods.go:61] "kindnet-nzf5r" [8d7ee556-9db1-49ce-a52b-403f54085f1f] Running
	I1207 23:34:39.312963  638483 system_pods.go:61] "kube-apiserver-no-preload-313006" [3c161ca5-34a9-4712-8eb3-6d444b18fae0] Running
	I1207 23:34:39.312971  638483 system_pods.go:61] "kube-controller-manager-no-preload-313006" [8b681c4d-7203-410e-a987-5f988f352aed] Running
	I1207 23:34:39.312977  638483 system_pods.go:61] "kube-proxy-xw4pf" [ebc0bfad-9d66-4e97-ba23-878bf95416a6] Running
	I1207 23:34:39.312980  638483 system_pods.go:61] "kube-scheduler-no-preload-313006" [40d9aeaa-01fd-49cc-9e20-4339df06b915] Running
	I1207 23:34:39.312984  638483 system_pods.go:61] "storage-provisioner" [9c75fba7-bec3-421e-9f99-b51827afb29d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:34:39.312992  638483 system_pods.go:74] duration metric: took 3.826178ms to wait for pod list to return data ...
	I1207 23:34:39.312999  638483 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:34:39.315350  638483 default_sa.go:45] found service account: "default"
	I1207 23:34:39.315373  638483 default_sa.go:55] duration metric: took 2.367911ms for default service account to be created ...
	I1207 23:34:39.315383  638483 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:34:39.318113  638483 system_pods.go:86] 8 kube-system pods found
	I1207 23:34:39.318144  638483 system_pods.go:89] "coredns-7d764666f9-btjrp" [c81bd338-0a5e-4937-8442-bbacd5f685c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:34:39.318152  638483 system_pods.go:89] "etcd-no-preload-313006" [2124ac32-ed11-49d4-b522-e0bb8b268bb1] Running
	I1207 23:34:39.318160  638483 system_pods.go:89] "kindnet-nzf5r" [8d7ee556-9db1-49ce-a52b-403f54085f1f] Running
	I1207 23:34:39.318166  638483 system_pods.go:89] "kube-apiserver-no-preload-313006" [3c161ca5-34a9-4712-8eb3-6d444b18fae0] Running
	I1207 23:34:39.318172  638483 system_pods.go:89] "kube-controller-manager-no-preload-313006" [8b681c4d-7203-410e-a987-5f988f352aed] Running
	I1207 23:34:39.318177  638483 system_pods.go:89] "kube-proxy-xw4pf" [ebc0bfad-9d66-4e97-ba23-878bf95416a6] Running
	I1207 23:34:39.318183  638483 system_pods.go:89] "kube-scheduler-no-preload-313006" [40d9aeaa-01fd-49cc-9e20-4339df06b915] Running
	I1207 23:34:39.318196  638483 system_pods.go:89] "storage-provisioner" [9c75fba7-bec3-421e-9f99-b51827afb29d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:34:39.318215  638483 retry.go:31] will retry after 253.43657ms: missing components: kube-dns
	I1207 23:34:39.576264  638483 system_pods.go:86] 8 kube-system pods found
	I1207 23:34:39.576294  638483 system_pods.go:89] "coredns-7d764666f9-btjrp" [c81bd338-0a5e-4937-8442-bbacd5f685c2] Running
	I1207 23:34:39.576300  638483 system_pods.go:89] "etcd-no-preload-313006" [2124ac32-ed11-49d4-b522-e0bb8b268bb1] Running
	I1207 23:34:39.576304  638483 system_pods.go:89] "kindnet-nzf5r" [8d7ee556-9db1-49ce-a52b-403f54085f1f] Running
	I1207 23:34:39.576308  638483 system_pods.go:89] "kube-apiserver-no-preload-313006" [3c161ca5-34a9-4712-8eb3-6d444b18fae0] Running
	I1207 23:34:39.576314  638483 system_pods.go:89] "kube-controller-manager-no-preload-313006" [8b681c4d-7203-410e-a987-5f988f352aed] Running
	I1207 23:34:39.576318  638483 system_pods.go:89] "kube-proxy-xw4pf" [ebc0bfad-9d66-4e97-ba23-878bf95416a6] Running
	I1207 23:34:39.576334  638483 system_pods.go:89] "kube-scheduler-no-preload-313006" [40d9aeaa-01fd-49cc-9e20-4339df06b915] Running
	I1207 23:34:39.576340  638483 system_pods.go:89] "storage-provisioner" [9c75fba7-bec3-421e-9f99-b51827afb29d] Running
	I1207 23:34:39.576351  638483 system_pods.go:126] duration metric: took 260.960708ms to wait for k8s-apps to be running ...
	I1207 23:34:39.576365  638483 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:34:39.576413  638483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:34:39.589701  638483 system_svc.go:56] duration metric: took 13.325296ms WaitForService to wait for kubelet
	I1207 23:34:39.589732  638483 kubeadm.go:587] duration metric: took 14.084084362s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:34:39.589755  638483 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:34:39.592397  638483 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:34:39.592423  638483 node_conditions.go:123] node cpu capacity is 8
	I1207 23:34:39.592437  638483 node_conditions.go:105] duration metric: took 2.676992ms to run NodePressure ...
	I1207 23:34:39.592449  638483 start.go:242] waiting for startup goroutines ...
	I1207 23:34:39.592455  638483 start.go:247] waiting for cluster config update ...
	I1207 23:34:39.592466  638483 start.go:256] writing updated cluster config ...
	I1207 23:34:39.592720  638483 ssh_runner.go:195] Run: rm -f paused
	I1207 23:34:39.596487  638483 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:34:39.599983  638483 pod_ready.go:83] waiting for pod "coredns-7d764666f9-btjrp" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:39.604099  638483 pod_ready.go:94] pod "coredns-7d764666f9-btjrp" is "Ready"
	I1207 23:34:39.604122  638483 pod_ready.go:86] duration metric: took 4.118259ms for pod "coredns-7d764666f9-btjrp" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:39.606054  638483 pod_ready.go:83] waiting for pod "etcd-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:39.609666  638483 pod_ready.go:94] pod "etcd-no-preload-313006" is "Ready"
	I1207 23:34:39.609694  638483 pod_ready.go:86] duration metric: took 3.615771ms for pod "etcd-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:39.611369  638483 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:39.614973  638483 pod_ready.go:94] pod "kube-apiserver-no-preload-313006" is "Ready"
	I1207 23:34:39.614995  638483 pod_ready.go:86] duration metric: took 3.602739ms for pod "kube-apiserver-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:39.616724  638483 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:40.000912  638483 pod_ready.go:94] pod "kube-controller-manager-no-preload-313006" is "Ready"
	I1207 23:34:40.000938  638483 pod_ready.go:86] duration metric: took 384.194683ms for pod "kube-controller-manager-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:40.201372  638483 pod_ready.go:83] waiting for pod "kube-proxy-xw4pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:40.600857  638483 pod_ready.go:94] pod "kube-proxy-xw4pf" is "Ready"
	I1207 23:34:40.600886  638483 pod_ready.go:86] duration metric: took 399.489355ms for pod "kube-proxy-xw4pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:40.801081  638483 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:41.201312  638483 pod_ready.go:94] pod "kube-scheduler-no-preload-313006" is "Ready"
	I1207 23:34:41.201369  638483 pod_ready.go:86] duration metric: took 400.254643ms for pod "kube-scheduler-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:34:41.201391  638483 pod_ready.go:40] duration metric: took 1.604869595s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:34:41.256992  638483 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1207 23:34:41.258720  638483 out.go:179] * Done! kubectl is now configured to use "no-preload-313006" cluster and "default" namespace by default
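	
	The run above finishes only after minikube's standard readiness gates pass (apiserver healthz, node Ready, kube-system pods running, kubelet service active). A minimal sketch for re-probing those gates by hand from the same host; the profile/context name and apiserver address are the ones recorded in this run, everything else is assumed:
	
	  # apiserver health endpoint (address from the api_server.go healthz check above)
	  curl -sk https://192.168.85.2:8443/healthz
	  # node and kube-system pod readiness
	  kubectl --context no-preload-313006 get nodes
	  kubectl --context no-preload-313006 -n kube-system wait --for=condition=Ready pods --all --timeout=4m
	  # kubelet service check (as in system_svc.go above)
	  minikube -p no-preload-313006 ssh -- sudo systemctl is-active kubelet
	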
	I1207 23:34:37.876588  647748 out.go:252] * Restarting existing docker container for "old-k8s-version-320477" ...
	I1207 23:34:37.876674  647748 cli_runner.go:164] Run: docker start old-k8s-version-320477
	I1207 23:34:38.151370  647748 cli_runner.go:164] Run: docker container inspect old-k8s-version-320477 --format={{.State.Status}}
	I1207 23:34:38.173453  647748 kic.go:430] container "old-k8s-version-320477" state is running.
	I1207 23:34:38.173813  647748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-320477
	I1207 23:34:38.194841  647748 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/config.json ...
	I1207 23:34:38.195128  647748 machine.go:94] provisionDockerMachine start ...
	I1207 23:34:38.195241  647748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:34:38.217757  647748 main.go:143] libmachine: Using SSH client type: native
	I1207 23:34:38.218121  647748 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1207 23:34:38.218140  647748 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:34:38.219028  647748 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55982->127.0.0.1:33438: read: connection reset by peer
	I1207 23:34:41.363994  647748 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-320477
	
	I1207 23:34:41.364026  647748 ubuntu.go:182] provisioning hostname "old-k8s-version-320477"
	I1207 23:34:41.364086  647748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:34:41.389233  647748 main.go:143] libmachine: Using SSH client type: native
	I1207 23:34:41.389508  647748 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1207 23:34:41.389531  647748 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-320477 && echo "old-k8s-version-320477" | sudo tee /etc/hostname
	I1207 23:34:41.538921  647748 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-320477
	
	I1207 23:34:41.538997  647748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:34:41.560443  647748 main.go:143] libmachine: Using SSH client type: native
	I1207 23:34:41.560791  647748 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1207 23:34:41.560827  647748 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-320477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-320477/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-320477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:34:41.690445  647748 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:34:41.690475  647748 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:34:41.690508  647748 ubuntu.go:190] setting up certificates
	I1207 23:34:41.690517  647748 provision.go:84] configureAuth start
	I1207 23:34:41.690586  647748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-320477
	I1207 23:34:41.709747  647748 provision.go:143] copyHostCerts
	I1207 23:34:41.709810  647748 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:34:41.709822  647748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:34:41.709884  647748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:34:41.710000  647748 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:34:41.710017  647748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:34:41.710061  647748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:34:41.710143  647748 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:34:41.710153  647748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:34:41.710188  647748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:34:41.710273  647748 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-320477 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-320477]
	I1207 23:34:41.846131  647748 provision.go:177] copyRemoteCerts
	I1207 23:34:41.846201  647748 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:34:41.846245  647748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:34:41.865850  647748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/old-k8s-version-320477/id_rsa Username:docker}
	I1207 23:34:41.961868  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1207 23:34:41.980828  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:34:42.000788  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:34:42.020537  647748 provision.go:87] duration metric: took 330.006712ms to configureAuth
	I1207 23:34:42.020566  647748 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:34:42.020758  647748 config.go:182] Loaded profile config "old-k8s-version-320477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1207 23:34:42.020869  647748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:34:42.041926  647748 main.go:143] libmachine: Using SSH client type: native
	I1207 23:34:42.042256  647748 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1207 23:34:42.042283  647748 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:34:40.196440  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1207 23:34:40.196520  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:34:40.196583  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:34:40.225863  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:34:40.225885  610371 cri.go:89] found id: "4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:40.225889  610371 cri.go:89] found id: ""
	I1207 23:34:40.225898  610371 logs.go:282] 2 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96 4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b]
	I1207 23:34:40.225961  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:40.230116  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:40.233979  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:34:40.234045  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:34:40.264871  610371 cri.go:89] found id: ""
	I1207 23:34:40.264899  610371 logs.go:282] 0 containers: []
	W1207 23:34:40.264912  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:34:40.264920  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:34:40.264985  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:34:40.293841  610371 cri.go:89] found id: ""
	I1207 23:34:40.293871  610371 logs.go:282] 0 containers: []
	W1207 23:34:40.293882  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:34:40.293891  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:34:40.293948  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:40.322663  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:40.322691  610371 cri.go:89] found id: ""
	I1207 23:34:40.322703  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:34:40.322758  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:40.327119  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:40.327200  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:40.356749  610371 cri.go:89] found id: ""
	I1207 23:34:40.356771  610371 logs.go:282] 0 containers: []
	W1207 23:34:40.356784  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:40.356791  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:40.356838  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:40.387220  610371 cri.go:89] found id: "0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:40.387243  610371 cri.go:89] found id: ""
	I1207 23:34:40.387255  610371 logs.go:282] 1 containers: [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d]
	I1207 23:34:40.387314  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:40.391435  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:40.391493  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:40.423308  610371 cri.go:89] found id: ""
	I1207 23:34:40.423369  610371 logs.go:282] 0 containers: []
	W1207 23:34:40.423383  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:40.423398  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:40.423455  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:40.454014  610371 cri.go:89] found id: ""
	I1207 23:34:40.454056  610371 logs.go:282] 0 containers: []
	W1207 23:34:40.454068  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:40.454096  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:40.454111  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:40.557090  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:40.557126  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:40.590224  610371 logs.go:123] Gathering logs for kube-apiserver [4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b] ...
	I1207 23:34:40.590272  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b"
	I1207 23:34:40.624836  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:40.624867  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
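	
	The block above is the diagnostic collection minikube falls back to when the apiserver healthz probe times out: it lists CRI containers per control-plane component, then tails kubelet, dmesg and per-container logs. A minimal sketch of the same collection run by hand from inside the node (via minikube ssh; the profile behind this process is not identifiable from the excerpt, and the container ID is the one found in this run):
	
	  sudo crictl ps -a --name kube-apiserver --quiet
	  sudo crictl logs --tail 400 4e22e1380c9b43b8d80f32d8ae276e6690c8b6998c7088f526f9fd3f1a50ba1b
	  sudo journalctl -u kubelet -n 400 --no-pager
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	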
	I1207 23:34:40.583045  648820 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1207 23:34:40.583285  648820 start.go:159] libmachine.API.Create for "embed-certs-654118" (driver="docker")
	I1207 23:34:40.583333  648820 client.go:173] LocalClient.Create starting
	I1207 23:34:40.583406  648820 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem
	I1207 23:34:40.583448  648820 main.go:143] libmachine: Decoding PEM data...
	I1207 23:34:40.583473  648820 main.go:143] libmachine: Parsing certificate...
	I1207 23:34:40.583546  648820 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem
	I1207 23:34:40.583584  648820 main.go:143] libmachine: Decoding PEM data...
	I1207 23:34:40.583605  648820 main.go:143] libmachine: Parsing certificate...
	I1207 23:34:40.583966  648820 cli_runner.go:164] Run: docker network inspect embed-certs-654118 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 23:34:40.602534  648820 cli_runner.go:211] docker network inspect embed-certs-654118 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 23:34:40.602627  648820 network_create.go:284] running [docker network inspect embed-certs-654118] to gather additional debugging logs...
	I1207 23:34:40.602655  648820 cli_runner.go:164] Run: docker network inspect embed-certs-654118
	W1207 23:34:40.622457  648820 cli_runner.go:211] docker network inspect embed-certs-654118 returned with exit code 1
	I1207 23:34:40.622493  648820 network_create.go:287] error running [docker network inspect embed-certs-654118]: docker network inspect embed-certs-654118: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-654118 not found
	I1207 23:34:40.622510  648820 network_create.go:289] output of [docker network inspect embed-certs-654118]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-654118 not found
	
	** /stderr **
	I1207 23:34:40.622669  648820 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:34:40.642096  648820 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-918c8f4f6e86 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:f0:02:fe:94:4b} reservation:<nil>}
	I1207 23:34:40.642968  648820 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce07fb07c16c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:d2:35:46:a2:0a} reservation:<nil>}
	I1207 23:34:40.643525  648820 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f198eadca31e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:79:39:d6:10:dc} reservation:<nil>}
	I1207 23:34:40.644090  648820 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0a95fdba7084 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:86:aa:af:1f:07:11} reservation:<nil>}
	I1207 23:34:40.644749  648820 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-357321d5a31d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:56:f5:0f:21:e8:00} reservation:<nil>}
	I1207 23:34:40.645602  648820 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-79f54ad63e60 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:e6:15:11:16:e7:20} reservation:<nil>}
	I1207 23:34:40.646624  648820 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eae3a0}
	I1207 23:34:40.646652  648820 network_create.go:124] attempt to create docker network embed-certs-654118 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1207 23:34:40.646718  648820 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-654118 embed-certs-654118
	I1207 23:34:40.700738  648820 network_create.go:108] docker network embed-certs-654118 192.168.103.0/24 created
	I1207 23:34:40.700769  648820 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-654118" container
	I1207 23:34:40.700845  648820 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 23:34:40.719228  648820 cli_runner.go:164] Run: docker volume create embed-certs-654118 --label name.minikube.sigs.k8s.io=embed-certs-654118 --label created_by.minikube.sigs.k8s.io=true
	I1207 23:34:40.737811  648820 oci.go:103] Successfully created a docker volume embed-certs-654118
	I1207 23:34:40.737932  648820 cli_runner.go:164] Run: docker run --rm --name embed-certs-654118-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-654118 --entrypoint /usr/bin/test -v embed-certs-654118:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 23:34:41.129693  648820 oci.go:107] Successfully prepared a docker volume embed-certs-654118
	I1207 23:34:41.129792  648820 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:34:41.129809  648820 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 23:34:41.129882  648820 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-654118:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1207 23:34:44.250263  648820 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-654118:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.12033866s)
	I1207 23:34:44.250312  648820 kic.go:203] duration metric: took 3.120496369s to extract preloaded images to volume ...
	W1207 23:34:44.250431  648820 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 23:34:44.250479  648820 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 23:34:44.250535  648820 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 23:34:44.314194  648820 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-654118 --name embed-certs-654118 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-654118 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-654118 --network embed-certs-654118 --ip 192.168.103.2 --volume embed-certs-654118:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 23:34:44.601964  648820 cli_runner.go:164] Run: docker container inspect embed-certs-654118 --format={{.State.Running}}
	I1207 23:34:44.623930  648820 cli_runner.go:164] Run: docker container inspect embed-certs-654118 --format={{.State.Status}}
	I1207 23:34:44.645483  648820 cli_runner.go:164] Run: docker exec embed-certs-654118 stat /var/lib/dpkg/alternatives/iptables
	I1207 23:34:44.694374  648820 oci.go:144] the created container "embed-certs-654118" has a running status.
	I1207 23:34:44.694425  648820 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/embed-certs-654118/id_rsa...
	I1207 23:34:45.057844  648820 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/embed-certs-654118/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 23:34:45.087681  648820 cli_runner.go:164] Run: docker container inspect embed-certs-654118 --format={{.State.Status}}
	I1207 23:34:45.107480  648820 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 23:34:45.107498  648820 kic_runner.go:114] Args: [docker exec --privileged embed-certs-654118 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 23:34:45.157847  648820 cli_runner.go:164] Run: docker container inspect embed-certs-654118 --format={{.State.Status}}
	I1207 23:34:45.177855  648820 machine.go:94] provisionDockerMachine start ...
	I1207 23:34:45.177959  648820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-654118
	I1207 23:34:45.198865  648820 main.go:143] libmachine: Using SSH client type: native
	I1207 23:34:45.199170  648820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1207 23:34:45.199187  648820 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:34:45.333533  648820 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-654118
	
	I1207 23:34:45.333582  648820 ubuntu.go:182] provisioning hostname "embed-certs-654118"
	I1207 23:34:45.333651  648820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-654118
	I1207 23:34:45.353845  648820 main.go:143] libmachine: Using SSH client type: native
	I1207 23:34:45.354155  648820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1207 23:34:45.354179  648820 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-654118 && echo "embed-certs-654118" | sudo tee /etc/hostname
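	
	Before the provisioning above, the embed-certs-654118 node was set up by scanning the existing bridge networks, taking the first free /24 (192.168.103.0/24), and creating a labelled network and volume for the kic container. A sketch of the equivalent manual steps, with the subnet, labels and options copied from the commands logged in this run:
	
	  docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-654118 embed-certs-654118
	  docker volume create embed-certs-654118 --label name.minikube.sigs.k8s.io=embed-certs-654118 --label created_by.minikube.sigs.k8s.io=true
	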
	I1207 23:34:44.087717  647748 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:34:44.087773  647748 machine.go:97] duration metric: took 5.892606794s to provisionDockerMachine
	I1207 23:34:44.087790  647748 start.go:293] postStartSetup for "old-k8s-version-320477" (driver="docker")
	I1207 23:34:44.087807  647748 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:34:44.087889  647748 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:34:44.087945  647748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:34:44.107206  647748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/old-k8s-version-320477/id_rsa Username:docker}
	I1207 23:34:44.203402  647748 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:34:44.207396  647748 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:34:44.207431  647748 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:34:44.207444  647748 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:34:44.207497  647748 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:34:44.207581  647748 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:34:44.207670  647748 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:34:44.216030  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:34:44.239228  647748 start.go:296] duration metric: took 151.416351ms for postStartSetup
	I1207 23:34:44.239358  647748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:34:44.239411  647748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:34:44.258818  647748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/old-k8s-version-320477/id_rsa Username:docker}
	I1207 23:34:44.359110  647748 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:34:44.364125  647748 fix.go:56] duration metric: took 6.509224769s for fixHost
	I1207 23:34:44.364159  647748 start.go:83] releasing machines lock for "old-k8s-version-320477", held for 6.509287275s
	I1207 23:34:44.364236  647748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-320477
	I1207 23:34:44.385949  647748 ssh_runner.go:195] Run: cat /version.json
	I1207 23:34:44.386015  647748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:34:44.386016  647748 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:34:44.386116  647748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:34:44.407791  647748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/old-k8s-version-320477/id_rsa Username:docker}
	I1207 23:34:44.408233  647748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/old-k8s-version-320477/id_rsa Username:docker}
	I1207 23:34:44.502962  647748 ssh_runner.go:195] Run: systemctl --version
	I1207 23:34:44.560997  647748 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:34:44.598002  647748 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:34:44.603465  647748 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:34:44.603543  647748 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:34:44.613303  647748 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:34:44.613407  647748 start.go:496] detecting cgroup driver to use...
	I1207 23:34:44.613446  647748 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:34:44.613490  647748 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:34:44.632707  647748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:34:44.648386  647748 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:34:44.648452  647748 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:34:44.666151  647748 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:34:44.679848  647748 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:34:44.792831  647748 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:34:44.895396  647748 docker.go:234] disabling docker service ...
	I1207 23:34:44.895606  647748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:34:44.918119  647748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:34:44.935074  647748 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:34:45.037715  647748 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:34:45.130949  647748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:34:45.145641  647748 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:34:45.162762  647748 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1207 23:34:45.162854  647748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:45.173724  647748 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:34:45.173796  647748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:45.185178  647748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:45.196478  647748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:45.206736  647748 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:34:45.215685  647748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:45.225625  647748 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:45.235296  647748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:45.246481  647748 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:34:45.255737  647748 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:34:45.264447  647748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:34:45.360498  647748 ssh_runner.go:195] Run: sudo systemctl restart crio
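The string of sed edits above (pause image, cgroup manager, conmon cgroup, default sysctls) rewrites CRI-O's drop-in config in place before the daemon-reload and restart. A quick, illustrative way to spot-check the result on the node, assuming the keys live in /etc/crio/crio.conf.d/02-crio.conf as in the kicbase image used here:

    # spot-check the values the sed edits above should leave behind (sketch only)
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",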
	I1207 23:34:45.507763  647748 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:34:45.507846  647748 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:34:45.512311  647748 start.go:564] Will wait 60s for crictl version
	I1207 23:34:45.512385  647748 ssh_runner.go:195] Run: which crictl
	I1207 23:34:45.517011  647748 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:34:45.543077  647748 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:34:45.543161  647748 ssh_runner.go:195] Run: crio --version
	I1207 23:34:45.572641  647748 ssh_runner.go:195] Run: crio --version
	I1207 23:34:45.603307  647748 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1207 23:34:45.604462  647748 cli_runner.go:164] Run: docker network inspect old-k8s-version-320477 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:34:45.622544  647748 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1207 23:34:45.626867  647748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:34:45.637610  647748 kubeadm.go:884] updating cluster {Name:old-k8s-version-320477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-320477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:34:45.637783  647748 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1207 23:34:45.637865  647748 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:34:45.674101  647748 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:34:45.674125  647748 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:34:45.674177  647748 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:34:45.700015  647748 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:34:45.700038  647748 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:34:45.700046  647748 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1207 23:34:45.700159  647748 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-320477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-320477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:34:45.700222  647748 ssh_runner.go:195] Run: crio config
	I1207 23:34:45.747857  647748 cni.go:84] Creating CNI manager for ""
	I1207 23:34:45.747883  647748 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:34:45.747906  647748 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:34:45.747928  647748 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-320477 NodeName:old-k8s-version-320477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:34:45.748061  647748 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-320477"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:34:45.748125  647748 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1207 23:34:45.758480  647748 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:34:45.758555  647748 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:34:45.766480  647748 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1207 23:34:45.779236  647748 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:34:45.791814  647748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1207 23:34:45.804642  647748 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:34:45.808482  647748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:34:45.819251  647748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:34:45.905789  647748 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:34:45.925711  647748 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477 for IP: 192.168.94.2
	I1207 23:34:45.925739  647748 certs.go:195] generating shared ca certs ...
	I1207 23:34:45.925762  647748 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:45.925918  647748 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:34:45.925958  647748 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:34:45.925969  647748 certs.go:257] generating profile certs ...
	I1207 23:34:45.926060  647748 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.key
	I1207 23:34:45.926123  647748 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/apiserver.key.9a12bf1f
	I1207 23:34:45.926160  647748 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/proxy-client.key
	I1207 23:34:45.926272  647748 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:34:45.926304  647748 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:34:45.926313  647748 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:34:45.926362  647748 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:34:45.926395  647748 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:34:45.926421  647748 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:34:45.926463  647748 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:34:45.927080  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:34:45.948563  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:34:45.970383  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:34:45.992852  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:34:46.018818  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1207 23:34:46.044931  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:34:46.064354  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:34:46.085787  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:34:46.107320  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:34:46.127617  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:34:46.149750  647748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:34:46.169634  647748 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:34:46.185103  647748 ssh_runner.go:195] Run: openssl version
	I1207 23:34:46.192202  647748 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:34:46.201687  647748 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:34:46.210907  647748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:34:46.215683  647748 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:34:46.215747  647748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:34:46.255479  647748 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:34:46.263824  647748 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:34:46.271156  647748 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:34:46.278984  647748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:34:46.282565  647748 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:34:46.282616  647748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:34:46.318286  647748 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:34:46.326499  647748 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:34:46.334099  647748 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:34:46.342387  647748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:34:46.346982  647748 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:34:46.347043  647748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:34:46.392009  647748 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
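Each CA above goes through the same three steps: symlink the PEM into /etc/ssl/certs by name, hash it, then confirm the hash-named symlink (<hash>.0) that OpenSSL uses for lookup exists. A condensed sketch of one iteration, using the minikubeCA paths from this log (51391683, 3ec20f2e and b5213941 are simply what `openssl x509 -hash` printed for the three certs):

    # one iteration of the per-certificate loop above (sketch)
    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")        # b5213941 for this CA
    sudo test -L "/etc/ssl/certs/${hash}.0" || echo "hash link ${hash}.0 missing"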
	I1207 23:34:46.400581  647748 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:34:46.405353  647748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:34:46.449002  647748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:34:46.503602  647748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:34:46.556636  647748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:34:46.617572  647748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:34:46.682809  647748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
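The `-checkend 86400` runs above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, a non-zero status flags it as expiring. The same check, stand-alone:

    # exit 0: valid for at least another 24h; exit 1: expires (or is expired) within 24h
    openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
        && echo "ok for 24h" || echo "expiring soon"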
	I1207 23:34:46.720239  647748 kubeadm.go:401] StartCluster: {Name:old-k8s-version-320477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-320477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:34:46.720383  647748 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:34:46.720482  647748 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:34:46.753219  647748 cri.go:89] found id: "935941a2cb637af36928ffb8fe952a120096af31c3a4cf9940d0decdc9dd0ffb"
	I1207 23:34:46.753246  647748 cri.go:89] found id: "3699584e5acbb7ce5f69043c7f75a0d7f118a2286a1460827d4e7093b932ea8f"
	I1207 23:34:46.753252  647748 cri.go:89] found id: "a21fad74c0501472726aa964a8eae6cf6097ab2ad2cc7f048b4b2e442c8ec636"
	I1207 23:34:46.753256  647748 cri.go:89] found id: "9a8b8635416941bed89621f1e677d2a500361f4b4b1de6dac578300985bf3afc"
	I1207 23:34:46.753261  647748 cri.go:89] found id: ""
	I1207 23:34:46.753307  647748 ssh_runner.go:195] Run: sudo runc list -f json
	W1207 23:34:46.768215  647748 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:34:46Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:34:46.768283  647748 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:34:46.777148  647748 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:34:46.777204  647748 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:34:46.777258  647748 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:34:46.785501  647748 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:34:46.786367  647748 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-320477" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:34:46.786876  647748 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-389542/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-320477" cluster setting kubeconfig missing "old-k8s-version-320477" context setting]
	I1207 23:34:46.787660  647748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:46.789615  647748 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:34:46.798815  647748 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1207 23:34:46.798848  647748 kubeadm.go:602] duration metric: took 21.635525ms to restartPrimaryControlPlane
	I1207 23:34:46.798859  647748 kubeadm.go:403] duration metric: took 78.632434ms to StartCluster
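The "does not require reconfiguration" conclusion appears to follow directly from the `diff -u` just above: the freshly rendered /var/tmp/minikube/kubeadm.yaml.new is compared against the kubeadm.yaml already on the node, and an empty diff lets the existing control plane be restarted rather than re-initialised. Reproduced by hand (same paths as in the log):

    # no output and exit 0 => no kubeadm config drift, restart the existing control plane
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
        && echo "no changes: reuse existing control plane"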
	I1207 23:34:46.798879  647748 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:46.798954  647748 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:34:46.800088  647748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:46.800356  647748 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:34:46.800438  647748 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:34:46.800532  647748 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-320477"
	I1207 23:34:46.800557  647748 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-320477"
	W1207 23:34:46.800565  647748 addons.go:248] addon storage-provisioner should already be in state true
	I1207 23:34:46.800580  647748 config.go:182] Loaded profile config "old-k8s-version-320477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1207 23:34:46.800586  647748 addons.go:70] Setting dashboard=true in profile "old-k8s-version-320477"
	I1207 23:34:46.800598  647748 host.go:66] Checking if "old-k8s-version-320477" exists ...
	I1207 23:34:46.800611  647748 addons.go:239] Setting addon dashboard=true in "old-k8s-version-320477"
	I1207 23:34:46.800606  647748 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-320477"
	W1207 23:34:46.800621  647748 addons.go:248] addon dashboard should already be in state true
	I1207 23:34:46.800638  647748 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-320477"
	I1207 23:34:46.800652  647748 host.go:66] Checking if "old-k8s-version-320477" exists ...
	I1207 23:34:46.800966  647748 cli_runner.go:164] Run: docker container inspect old-k8s-version-320477 --format={{.State.Status}}
	I1207 23:34:46.801034  647748 cli_runner.go:164] Run: docker container inspect old-k8s-version-320477 --format={{.State.Status}}
	I1207 23:34:46.801090  647748 cli_runner.go:164] Run: docker container inspect old-k8s-version-320477 --format={{.State.Status}}
	I1207 23:34:46.804009  647748 out.go:179] * Verifying Kubernetes components...
	I1207 23:34:46.805525  647748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:34:46.827360  647748 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-320477"
	W1207 23:34:46.827389  647748 addons.go:248] addon default-storageclass should already be in state true
	I1207 23:34:46.827420  647748 host.go:66] Checking if "old-k8s-version-320477" exists ...
	I1207 23:34:46.827895  647748 cli_runner.go:164] Run: docker container inspect old-k8s-version-320477 --format={{.State.Status}}
	I1207 23:34:46.829070  647748 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:34:46.830257  647748 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1207 23:34:46.830423  647748 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:34:46.830439  647748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:34:46.830508  647748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:34:46.833922  647748 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1207 23:34:46.834845  647748 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1207 23:34:46.834869  647748 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1207 23:34:46.834933  647748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:34:46.872252  647748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/old-k8s-version-320477/id_rsa Username:docker}
	I1207 23:34:46.873107  647748 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:34:46.873126  647748 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:34:46.873182  647748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:34:46.880415  647748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/old-k8s-version-320477/id_rsa Username:docker}
	I1207 23:34:46.902186  647748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/old-k8s-version-320477/id_rsa Username:docker}
	I1207 23:34:46.982395  647748 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:34:46.997994  647748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:34:47.002641  647748 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1207 23:34:47.002670  647748 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1207 23:34:47.002825  647748 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-320477" to be "Ready" ...
	I1207 23:34:47.013674  647748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:34:47.021554  647748 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1207 23:34:47.021581  647748 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1207 23:34:47.039952  647748 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1207 23:34:47.039992  647748 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1207 23:34:47.059481  647748 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1207 23:34:47.059512  647748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1207 23:34:47.079532  647748 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1207 23:34:47.079562  647748 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1207 23:34:47.103616  647748 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1207 23:34:47.103648  647748 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1207 23:34:47.130575  647748 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1207 23:34:47.130608  647748 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1207 23:34:47.145251  647748 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1207 23:34:47.145284  647748 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1207 23:34:47.159419  647748 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:34:47.159448  647748 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1207 23:34:47.174411  647748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
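Once the dashboard manifests are applied with the cluster's own kubectl binary, the rollout could be checked the same way; the kubernetes-dashboard namespace name is an assumption here (it is created by dashboard-ns.yaml and not echoed in this log):

    # illustrative follow-up only; namespace name assumed from dashboard-ns.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.28.0/kubectl -n kubernetes-dashboard get deploy,pods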
	I1207 23:34:43.266260  610371 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (2.641370874s)
	W1207 23:34:43.266303  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:35094->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:35094->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1207 23:34:43.266315  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:34:43.266345  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:34:43.298911  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:34:43.298943  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:43.326856  610371 logs.go:123] Gathering logs for kube-controller-manager [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d] ...
	I1207 23:34:43.326883  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:43.358316  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:43.358376  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:43.410494  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:34:43.410534  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:45.942403  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:34:45.942917  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:34:45.942988  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:34:45.943052  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:34:45.975012  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:34:45.975038  610371 cri.go:89] found id: ""
	I1207 23:34:45.975050  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:34:45.975110  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:45.979662  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:34:45.979751  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:34:46.012986  610371 cri.go:89] found id: ""
	I1207 23:34:46.013041  610371 logs.go:282] 0 containers: []
	W1207 23:34:46.013053  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:34:46.013067  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:34:46.013147  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:34:46.048895  610371 cri.go:89] found id: ""
	I1207 23:34:46.048924  610371 logs.go:282] 0 containers: []
	W1207 23:34:46.048934  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:34:46.048943  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:34:46.048995  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:34:46.079057  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:46.079112  610371 cri.go:89] found id: ""
	I1207 23:34:46.079124  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:34:46.079244  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:46.083425  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:34:46.083494  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:34:46.112810  610371 cri.go:89] found id: ""
	I1207 23:34:46.112840  610371 logs.go:282] 0 containers: []
	W1207 23:34:46.112851  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:34:46.112859  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:34:46.112919  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:34:46.144827  610371 cri.go:89] found id: "0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:46.144856  610371 cri.go:89] found id: ""
	I1207 23:34:46.144867  610371 logs.go:282] 1 containers: [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d]
	I1207 23:34:46.144929  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:34:46.149378  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:34:46.149467  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:34:46.177566  610371 cri.go:89] found id: ""
	I1207 23:34:46.177600  610371 logs.go:282] 0 containers: []
	W1207 23:34:46.177614  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:34:46.177623  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:34:46.177693  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:34:46.210986  610371 cri.go:89] found id: ""
	I1207 23:34:46.211014  610371 logs.go:282] 0 containers: []
	W1207 23:34:46.211030  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:34:46.211043  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:34:46.211057  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:34:46.244829  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:34:46.244855  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:34:46.338526  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:34:46.338557  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:34:46.372852  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:34:46.372889  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:34:46.438248  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:34:46.438295  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:34:46.438314  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:34:46.478293  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:34:46.478345  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:34:46.514116  610371 logs.go:123] Gathering logs for kube-controller-manager [0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d] ...
	I1207 23:34:46.514160  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0a2e7abfb7103cb4b84980f9141523ad0c86a6e26cee12dd610dff3ff7f53d5d"
	I1207 23:34:46.560462  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:34:46.560490  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:34:45.499037  648820 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-654118
	
	I1207 23:34:45.499121  648820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-654118
	I1207 23:34:45.519289  648820 main.go:143] libmachine: Using SSH client type: native
	I1207 23:34:45.519586  648820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1207 23:34:45.519616  648820 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-654118' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-654118/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-654118' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:34:45.652120  648820 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:34:45.652155  648820 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:34:45.652205  648820 ubuntu.go:190] setting up certificates
	I1207 23:34:45.652220  648820 provision.go:84] configureAuth start
	I1207 23:34:45.652288  648820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-654118
	I1207 23:34:45.672987  648820 provision.go:143] copyHostCerts
	I1207 23:34:45.673054  648820 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:34:45.673067  648820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:34:45.673165  648820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:34:45.673290  648820 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:34:45.673302  648820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:34:45.673361  648820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:34:45.673440  648820 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:34:45.673451  648820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:34:45.673495  648820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:34:45.673582  648820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.embed-certs-654118 san=[127.0.0.1 192.168.103.2 embed-certs-654118 localhost minikube]
	I1207 23:34:45.708657  648820 provision.go:177] copyRemoteCerts
	I1207 23:34:45.708726  648820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:34:45.708774  648820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-654118
	I1207 23:34:45.728296  648820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/embed-certs-654118/id_rsa Username:docker}
	I1207 23:34:45.823991  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:34:45.843947  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:34:45.867048  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1207 23:34:45.884646  648820 provision.go:87] duration metric: took 232.4072ms to configureAuth
	I1207 23:34:45.884680  648820 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:34:45.884844  648820 config.go:182] Loaded profile config "embed-certs-654118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:34:45.884941  648820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-654118
	I1207 23:34:45.904130  648820 main.go:143] libmachine: Using SSH client type: native
	I1207 23:34:45.904411  648820 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1207 23:34:45.904430  648820 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:34:46.192859  648820 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:34:46.192887  648820 machine.go:97] duration metric: took 1.015006985s to provisionDockerMachine
	I1207 23:34:46.192900  648820 client.go:176] duration metric: took 5.609558288s to LocalClient.Create
	I1207 23:34:46.192922  648820 start.go:167] duration metric: took 5.609645392s to libmachine.API.Create "embed-certs-654118"
	I1207 23:34:46.192936  648820 start.go:293] postStartSetup for "embed-certs-654118" (driver="docker")
	I1207 23:34:46.192955  648820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:34:46.193019  648820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:34:46.193083  648820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-654118
	I1207 23:34:46.214903  648820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/embed-certs-654118/id_rsa Username:docker}
	I1207 23:34:46.312380  648820 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:34:46.316373  648820 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:34:46.316406  648820 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:34:46.316422  648820 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:34:46.316511  648820 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:34:46.316646  648820 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:34:46.316775  648820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:34:46.326106  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:34:46.347478  648820 start.go:296] duration metric: took 154.509049ms for postStartSetup
	I1207 23:34:46.347842  648820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-654118
	I1207 23:34:46.368470  648820 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/config.json ...
	I1207 23:34:46.368817  648820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:34:46.368876  648820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-654118
	I1207 23:34:46.388859  648820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/embed-certs-654118/id_rsa Username:docker}
	I1207 23:34:46.488491  648820 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:34:46.495550  648820 start.go:128] duration metric: took 5.914595692s to createHost
	I1207 23:34:46.495581  648820 start.go:83] releasing machines lock for "embed-certs-654118", held for 5.914898615s
	I1207 23:34:46.495654  648820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-654118
	I1207 23:34:46.518642  648820 ssh_runner.go:195] Run: cat /version.json
	I1207 23:34:46.518698  648820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-654118
	I1207 23:34:46.518891  648820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:34:46.518999  648820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-654118
	I1207 23:34:46.545024  648820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/embed-certs-654118/id_rsa Username:docker}
	I1207 23:34:46.546347  648820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/embed-certs-654118/id_rsa Username:docker}
	I1207 23:34:46.745135  648820 ssh_runner.go:195] Run: systemctl --version
	I1207 23:34:46.752698  648820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:34:46.796347  648820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:34:46.802426  648820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:34:46.802486  648820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:34:46.841450  648820 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 23:34:46.841613  648820 start.go:496] detecting cgroup driver to use...
	I1207 23:34:46.841728  648820 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:34:46.841798  648820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:34:46.874862  648820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:34:46.893386  648820 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:34:46.893457  648820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:34:46.920086  648820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:34:46.943824  648820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:34:47.067239  648820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:34:47.205690  648820 docker.go:234] disabling docker service ...
	I1207 23:34:47.205840  648820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:34:47.228672  648820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:34:47.244396  648820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:34:47.360143  648820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:34:47.484101  648820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:34:47.498461  648820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:34:47.513710  648820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:34:47.513782  648820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:47.527916  648820 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:34:47.527984  648820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:47.537277  648820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:47.546551  648820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:47.556574  648820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:34:47.565385  648820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:47.574655  648820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:47.589235  648820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:34:47.598917  648820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:34:47.606924  648820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:34:47.615086  648820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:34:47.703915  648820 ssh_runner.go:195] Run: sudo systemctl restart crio
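
For anyone replaying this step by hand outside the test harness, the sequence above boils down to pointing crictl at the CRI-O socket, pinning the pause image and the systemd cgroup driver in CRI-O's drop-in config, enabling IPv4 forwarding, and restarting the runtime. A minimal sketch, assuming the stock /etc/crio/crio.conf.d/02-crio.conf drop-in from the kicbase image already defines the keys being rewritten (paths and the pause tag are taken from the log above):

	# Point crictl at the CRI-O socket.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pin the pause image and force the systemd cgroup driver in the drop-in config.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	# Kubernetes requires IPv4 forwarding on the node.
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio
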
	I1207 23:34:47.849390  648820 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:34:47.849479  648820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:34:47.853770  648820 start.go:564] Will wait 60s for crictl version
	I1207 23:34:47.853827  648820 ssh_runner.go:195] Run: which crictl
	I1207 23:34:47.857623  648820 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:34:47.884176  648820 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:34:47.884269  648820 ssh_runner.go:195] Run: crio --version
	I1207 23:34:47.912397  648820 ssh_runner.go:195] Run: crio --version
	I1207 23:34:47.947170  648820 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:34:47.948508  648820 cli_runner.go:164] Run: docker network inspect embed-certs-654118 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:34:47.968942  648820 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1207 23:34:47.973523  648820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:34:47.985743  648820 kubeadm.go:884] updating cluster {Name:embed-certs-654118 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-654118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:34:47.985882  648820 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:34:47.985933  648820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:34:48.024749  648820 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:34:48.024778  648820 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:34:48.024838  648820 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:34:48.054754  648820 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:34:48.054784  648820 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:34:48.054794  648820 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1207 23:34:48.054879  648820 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-654118 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-654118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:34:48.054943  648820 ssh_runner.go:195] Run: crio config
	I1207 23:34:48.110900  648820 cni.go:84] Creating CNI manager for ""
	I1207 23:34:48.110921  648820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:34:48.110939  648820 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:34:48.110960  648820 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-654118 NodeName:embed-certs-654118 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:34:48.111107  648820 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-654118"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
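
The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A config assembled like this can be sanity-checked before kubeadm touches the node; a sketch using the binary path and file name from this log (kubeadm config validate requires a reasonably recent kubeadm):

	# Validate the generated config without changing anything on the node.
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# Or render the fully defaulted config that kubeadm would actually use.
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
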
	
	I1207 23:34:48.111189  648820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:34:48.120754  648820 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:34:48.120822  648820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:34:48.129938  648820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1207 23:34:48.143789  648820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:34:48.161735  648820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1207 23:34:48.176981  648820 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:34:48.181466  648820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:34:48.192901  648820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:34:48.283923  648820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:34:48.310128  648820 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118 for IP: 192.168.103.2
	I1207 23:34:48.310152  648820 certs.go:195] generating shared ca certs ...
	I1207 23:34:48.310172  648820 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:48.310394  648820 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:34:48.310466  648820 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:34:48.310483  648820 certs.go:257] generating profile certs ...
	I1207 23:34:48.310573  648820 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/client.key
	I1207 23:34:48.310594  648820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/client.crt with IP's: []
	I1207 23:34:48.351680  648820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/client.crt ...
	I1207 23:34:48.351708  648820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/client.crt: {Name:mk262954e176736c5af4a16ebe2f109c2292ff68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:48.351886  648820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/client.key ...
	I1207 23:34:48.351906  648820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/client.key: {Name:mk1e9886e594454bcd4d2fac0fe4ab75602d3e05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:48.352030  648820 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/apiserver.key.a2ab0279
	I1207 23:34:48.352052  648820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/apiserver.crt.a2ab0279 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1207 23:34:48.450797  648820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/apiserver.crt.a2ab0279 ...
	I1207 23:34:48.450838  648820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/apiserver.crt.a2ab0279: {Name:mk7bcdfdff9468b0b6fa047d3999a1aef742b66e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:48.451045  648820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/apiserver.key.a2ab0279 ...
	I1207 23:34:48.451062  648820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/apiserver.key.a2ab0279: {Name:mke480ed1932a2ad401ee9309b731c7968ab422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:48.451162  648820 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/apiserver.crt.a2ab0279 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/apiserver.crt
	I1207 23:34:48.451271  648820 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/apiserver.key.a2ab0279 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/apiserver.key
	I1207 23:34:48.451383  648820 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/proxy-client.key
	I1207 23:34:48.451405  648820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/proxy-client.crt with IP's: []
	I1207 23:34:48.480750  648820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/proxy-client.crt ...
	I1207 23:34:48.480782  648820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/proxy-client.crt: {Name:mkfaad6fbd3e82d25b8bc0c1f3101b491fe21864 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:48.480994  648820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/proxy-client.key ...
	I1207 23:34:48.481015  648820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/proxy-client.key: {Name:mk1627816cffab891470c752d42605ad43b67e3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:34:48.481278  648820 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:34:48.481355  648820 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:34:48.481374  648820 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:34:48.481412  648820 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:34:48.481448  648820 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:34:48.481489  648820 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:34:48.481552  648820 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:34:48.482194  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:34:48.502108  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:34:48.520483  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:34:48.538093  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:34:48.564045  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1207 23:34:48.581862  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:34:48.599979  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:34:48.617940  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:34:48.636842  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:34:48.666621  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:34:48.684521  648820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:34:48.702579  648820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
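
The apiserver certificate copied above was generated a few lines earlier with SANs for the service VIP 10.96.0.1, loopback, 10.0.0.1 and the node IP 192.168.103.2. If a start later fails with TLS name/IP mismatch errors, the SANs on the installed certificate can be inspected directly; a sketch using the destination path from the scp step above:

	# List the IP/DNS SANs baked into the apiserver certificate.
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
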
	I1207 23:34:48.720647  648820 ssh_runner.go:195] Run: openssl version
	I1207 23:34:48.728462  648820 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:34:48.736209  648820 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:34:48.744088  648820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:34:48.748077  648820 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:34:48.748137  648820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:34:48.782499  648820 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:34:48.790343  648820 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
	I1207 23:34:48.797893  648820 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:34:48.808127  648820 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:34:48.820556  648820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:34:48.826157  648820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:34:48.826224  648820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:34:48.878765  648820 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:34:48.887097  648820 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:34:48.895261  648820 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:34:48.902999  648820 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:34:48.910563  648820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:34:48.914587  648820 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:34:48.914662  648820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:34:48.963844  648820 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:34:48.977677  648820 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
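
The hash-named links created above (3ec20f2e.0, b5213941.0, 51391683.0) are how OpenSSL finds trusted CAs: it looks certificates up in /etc/ssl/certs by their subject-name hash, so each installed PEM needs a matching <hash>.0 symlink. A minimal sketch of the same step for a single certificate, with the file name taken from the log:

	# Compute the subject hash and expose the CA under it, as OpenSSL expects.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
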
	I1207 23:34:48.990136  648820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:34:49.002286  648820 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:34:49.002387  648820 kubeadm.go:401] StartCluster: {Name:embed-certs-654118 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-654118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:34:49.002467  648820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:34:49.002519  648820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:34:49.060988  648820 cri.go:89] found id: ""
	I1207 23:34:49.061074  648820 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:34:49.072302  648820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:34:49.084828  648820 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:34:49.084897  648820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:34:49.095552  648820 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:34:49.095577  648820 kubeadm.go:158] found existing configuration files:
	
	I1207 23:34:49.095648  648820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:34:49.104638  648820 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:34:49.104707  648820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:34:49.114250  648820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:34:49.123951  648820 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:34:49.124018  648820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:34:49.133213  648820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:34:49.143456  648820 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:34:49.143527  648820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:34:49.153033  648820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:34:49.164418  648820 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:34:49.164577  648820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 23:34:49.174110  648820 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:34:49.233592  648820 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 23:34:49.233703  648820 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:34:49.259469  648820 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:34:49.259554  648820 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:34:49.259604  648820 kubeadm.go:319] OS: Linux
	I1207 23:34:49.259674  648820 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:34:49.259739  648820 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:34:49.259782  648820 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:34:49.259828  648820 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:34:49.259869  648820 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:34:49.259913  648820 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:34:49.259954  648820 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:34:49.260000  648820 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:34:49.328072  648820 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:34:49.328210  648820 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:34:49.328317  648820 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:34:49.337347  648820 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 23:34:48.953642  647748 node_ready.go:49] node "old-k8s-version-320477" is "Ready"
	I1207 23:34:48.953677  647748 node_ready.go:38] duration metric: took 1.950818024s for node "old-k8s-version-320477" to be "Ready" ...
	I1207 23:34:48.953694  647748 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:34:48.953751  647748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:34:49.864150  647748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.866117414s)
	I1207 23:34:49.864221  647748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.850521487s)
	I1207 23:34:50.231788  647748 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.05730827s)
	I1207 23:34:50.231812  647748 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.278039492s)
	I1207 23:34:50.231833  647748 api_server.go:72] duration metric: took 3.431442898s to wait for apiserver process to appear ...
	I1207 23:34:50.231841  647748 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:34:50.231862  647748 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1207 23:34:50.233364  647748 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-320477 addons enable metrics-server
	
	I1207 23:34:50.234830  647748 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1207 23:34:49.340403  648820 out.go:252]   - Generating certificates and keys ...
	I1207 23:34:49.340531  648820 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:34:49.340623  648820 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:34:49.721286  648820 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:34:49.870019  648820 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	
	
	==> CRI-O <==
	Dec 07 23:34:39 no-preload-313006 crio[769]: time="2025-12-07T23:34:39.185872384Z" level=info msg="Started container" PID=2878 containerID=5b9249826970bd0e1763cca15d42b73f558ac65fb972f9e389ca50ef7c8873fd description=kube-system/coredns-7d764666f9-btjrp/coredns id=7ec5fc68-ad4f-4690-a868-b2a81d3c2d01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0813eda1c5620e582ab28fc03859a186539666aea5cf83d274e44efac1d1d22
	Dec 07 23:34:39 no-preload-313006 crio[769]: time="2025-12-07T23:34:39.185968768Z" level=info msg="Started container" PID=2877 containerID=05d6cec7a980155f3c0f3fca2e1afca74416a4b6e9269997cc029a6e1eebd4d2 description=kube-system/storage-provisioner/storage-provisioner id=e576bd8b-c368-4761-870b-74d6fdcf44ee name=/runtime.v1.RuntimeService/StartContainer sandboxID=a7ee4d219375e67a7d37359f771da3e11f24e10c68ac5b779d876732930c9197
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.757709812Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8c2df342-df40-422b-9249-86292c79769f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.757781362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.762692507Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bcfa773413bea33a4386b73fec1803e21d9e29f794237baee809394a7ace7d47 UID:9f794bb8-ad22-47d0-a7a7-e5068ff54805 NetNS:/var/run/netns/fe5b8fa8-779f-4a95-ace6-b5380829d64a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005b4278}] Aliases:map[]}"
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.762723926Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.773220582Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bcfa773413bea33a4386b73fec1803e21d9e29f794237baee809394a7ace7d47 UID:9f794bb8-ad22-47d0-a7a7-e5068ff54805 NetNS:/var/run/netns/fe5b8fa8-779f-4a95-ace6-b5380829d64a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005b4278}] Aliases:map[]}"
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.773405766Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.77421944Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.775429602Z" level=info msg="Ran pod sandbox bcfa773413bea33a4386b73fec1803e21d9e29f794237baee809394a7ace7d47 with infra container: default/busybox/POD" id=8c2df342-df40-422b-9249-86292c79769f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.776836343Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7d3a59a4-ac85-480a-862b-f971dabcce39 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.776979235Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7d3a59a4-ac85-480a-862b-f971dabcce39 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.77701648Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7d3a59a4-ac85-480a-862b-f971dabcce39 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.777852973Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f3addd86-051b-4f6c-af0e-658de237d60d name=/runtime.v1.ImageService/PullImage
	Dec 07 23:34:41 no-preload-313006 crio[769]: time="2025-12-07T23:34:41.7816194Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 07 23:34:44 no-preload-313006 crio[769]: time="2025-12-07T23:34:44.2580818Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f3addd86-051b-4f6c-af0e-658de237d60d name=/runtime.v1.ImageService/PullImage
	Dec 07 23:34:44 no-preload-313006 crio[769]: time="2025-12-07T23:34:44.258731651Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b00ca820-bc6f-4a1a-89af-be66e82003c2 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:34:44 no-preload-313006 crio[769]: time="2025-12-07T23:34:44.260544932Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0bd08295-cebf-43c9-ac28-439e801b9e13 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:34:44 no-preload-313006 crio[769]: time="2025-12-07T23:34:44.268059583Z" level=info msg="Creating container: default/busybox/busybox" id=89eca8ce-ef0e-445b-b74c-303936880538 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:34:44 no-preload-313006 crio[769]: time="2025-12-07T23:34:44.268474886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:34:44 no-preload-313006 crio[769]: time="2025-12-07T23:34:44.27360789Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:34:44 no-preload-313006 crio[769]: time="2025-12-07T23:34:44.274195394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:34:44 no-preload-313006 crio[769]: time="2025-12-07T23:34:44.317043223Z" level=info msg="Created container 70611d8af725b718448c910470732318dd6873eee584a5627cf39ce14551b625: default/busybox/busybox" id=89eca8ce-ef0e-445b-b74c-303936880538 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:34:44 no-preload-313006 crio[769]: time="2025-12-07T23:34:44.317761688Z" level=info msg="Starting container: 70611d8af725b718448c910470732318dd6873eee584a5627cf39ce14551b625" id=b29bf243-ea80-4806-a52b-14df6c617a7d name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:34:44 no-preload-313006 crio[769]: time="2025-12-07T23:34:44.319522915Z" level=info msg="Started container" PID=2951 containerID=70611d8af725b718448c910470732318dd6873eee584a5627cf39ce14551b625 description=default/busybox/busybox id=b29bf243-ea80-4806-a52b-14df6c617a7d name=/runtime.v1.RuntimeService/StartContainer sandboxID=bcfa773413bea33a4386b73fec1803e21d9e29f794237baee809394a7ace7d47
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	70611d8af725b       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   bcfa773413bea       busybox                                     default
	5b9249826970b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   c0813eda1c562       coredns-7d764666f9-btjrp                    kube-system
	05d6cec7a9801       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   a7ee4d219375e       storage-provisioner                         kube-system
	59a128907c646       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   b625ca07f04d6       kindnet-nzf5r                               kube-system
	648799e7587d5       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      26 seconds ago      Running             kube-proxy                0                   3e47f7f18d00d       kube-proxy-xw4pf                            kube-system
	00092f1fa2781       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      36 seconds ago      Running             kube-controller-manager   0                   6dc694d090bbe       kube-controller-manager-no-preload-313006   kube-system
	d66adfc6b32be       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      36 seconds ago      Running             kube-scheduler            0                   e3bee21e487d5       kube-scheduler-no-preload-313006            kube-system
	3ceab8ba09d57       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      36 seconds ago      Running             etcd                      0                   dbed55f3bee4b       etcd-no-preload-313006                      kube-system
	a3ebf27a3d036       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      36 seconds ago      Running             kube-apiserver            0                   d0c83ef29c092       kube-apiserver-no-preload-313006            kube-system
	
	
	==> coredns [5b9249826970bd0e1763cca15d42b73f558ac65fb972f9e389ca50ef7c8873fd] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38711 - 28411 "HINFO IN 998226152457933300.5984439586792508373. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.020947497s
	
	
	==> describe nodes <==
	Name:               no-preload-313006
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-313006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=no-preload-313006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_34_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:34:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-313006
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:34:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:34:50 +0000   Sun, 07 Dec 2025 23:34:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:34:50 +0000   Sun, 07 Dec 2025 23:34:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:34:50 +0000   Sun, 07 Dec 2025 23:34:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:34:50 +0000   Sun, 07 Dec 2025 23:34:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-313006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                1b1493a2-5c01-4861-a1e5-15f85715a778
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-btjrp                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-313006                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-nzf5r                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-313006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-313006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-xw4pf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-313006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node no-preload-313006 event: Registered Node no-preload-313006 in Controller
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [3ceab8ba09d57382d11231e23910c98107a7097e59eaea17cfc37ead06145a72] <==
	{"level":"warn","ts":"2025-12-07T23:34:15.837229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.851987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.865387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.871985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.878653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.886212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.893299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.899778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.907405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.915168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.929645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.936270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.943564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:15.951483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:16.007139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:17.432087Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.192017ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597574102534641 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/no-preload-313006.187f12fc7344e328\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/no-preload-313006.187f12fc7344e328\" value_size:554 lease:499225537247758831 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-07T23:34:17.432198Z","caller":"traceutil/trace.go:172","msg":"trace[738302580] transaction","detail":"{read_only:false; response_revision:68; number_of_response:1; }","duration":"193.210301ms","start":"2025-12-07T23:34:17.238965Z","end":"2025-12-07T23:34:17.432175Z","steps":["trace[738302580] 'process raft request'  (duration: 73.591031ms)","trace[738302580] 'compare'  (duration: 119.092937ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-07T23:34:17.700893Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"199.594384ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-12-07T23:34:17.700958Z","caller":"traceutil/trace.go:172","msg":"trace[1925153071] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:0; response_revision:70; }","duration":"199.675898ms","start":"2025-12-07T23:34:17.501268Z","end":"2025-12-07T23:34:17.700944Z","steps":["trace[1925153071] 'agreement among raft nodes before linearized reading'  (duration: 59.60435ms)","trace[1925153071] 'range keys from in-memory index tree'  (duration: 139.955507ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-07T23:34:17.700953Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.008712ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597574102534652 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/priorityclasses/system-cluster-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-cluster-critical\" value_size:407 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-12-07T23:34:17.701083Z","caller":"traceutil/trace.go:172","msg":"trace[1929878084] transaction","detail":"{read_only:false; response_revision:72; number_of_response:1; }","duration":"200.364238ms","start":"2025-12-07T23:34:17.500709Z","end":"2025-12-07T23:34:17.701073Z","steps":["trace[1929878084] 'process raft request'  (duration: 200.316517ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T23:34:17.701145Z","caller":"traceutil/trace.go:172","msg":"trace[1931973897] transaction","detail":"{read_only:false; response_revision:71; number_of_response:1; }","duration":"200.802747ms","start":"2025-12-07T23:34:17.500313Z","end":"2025-12-07T23:34:17.701116Z","steps":["trace[1931973897] 'process raft request'  (duration: 60.590947ms)","trace[1931973897] 'compare'  (duration: 139.915196ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-07T23:34:43.568503Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.19326ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-07T23:34:43.568584Z","caller":"traceutil/trace.go:172","msg":"trace[1378721213] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:473; }","duration":"116.324944ms","start":"2025-12-07T23:34:43.452245Z","end":"2025-12-07T23:34:43.568570Z","steps":["trace[1378721213] 'range keys from in-memory index tree'  (duration: 116.094456ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-07T23:34:43.699448Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.556438ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597574102535640 > lease_revoke:<id:06ed9afb2a74cd3b>","response":"size:28"}
	
	
	==> kernel <==
	 23:34:52 up  2:17,  0 user,  load average: 2.44, 2.20, 1.79
	Linux no-preload-313006 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [59a128907c646d71e6467c28b9b307134568720f70944ffa47620806eb7a89f7] <==
	I1207 23:34:28.190371       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:34:28.190693       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1207 23:34:28.190890       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:34:28.190909       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:34:28.190935       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:34:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:34:28.460514       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:34:28.460556       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:34:28.460567       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:34:28.460742       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:34:28.860909       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:34:28.860932       1 metrics.go:72] Registering metrics
	I1207 23:34:28.860979       1 controller.go:711] "Syncing nftables rules"
	I1207 23:34:38.460420       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:34:38.460519       1 main.go:301] handling current node
	I1207 23:34:48.463447       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:34:48.463503       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a3ebf27a3d0364b80ecf85d0f455725518de3ed4173c3e730db805f658920078] <==
	I1207 23:34:16.484222       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:34:16.484230       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:34:16.487113       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1207 23:34:16.487226       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:34:16.491280       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1207 23:34:16.491340       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:34:16.668642       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:34:17.496902       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1207 23:34:17.701999       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1207 23:34:17.702019       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1207 23:34:18.359195       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:34:18.395556       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:34:18.481486       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1207 23:34:18.487570       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1207 23:34:18.488593       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:34:18.492647       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:34:19.396784       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:34:19.474316       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:34:19.485584       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1207 23:34:19.494072       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:34:25.049220       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:34:25.053135       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:34:25.349842       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:34:25.397748       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1207 23:34:50.548632       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:39910: use of closed network connection
	
	
	==> kube-controller-manager [00092f1fa27815743f92894dbab46a5460e2d133a0809acb158faac0ea6cb022] <==
	I1207 23:34:24.201206       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:34:24.201211       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.201298       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.201443       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.200842       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.200895       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.201567       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.201636       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.201691       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.201707       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.200705       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.201803       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.201909       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.201916       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.202083       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.201955       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.202423       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.211739       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.212974       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:34:24.215225       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-313006" podCIDRs=["10.244.0.0/24"]
	I1207 23:34:24.302787       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:24.302811       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:34:24.302819       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:34:24.313858       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:39.202282       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [648799e7587d5a8f4556a20d5cdca5ca1282c3df18da67fefbdbd9b0a5e4d9ec] <==
	I1207 23:34:25.830908       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:34:25.908652       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:34:26.009433       1 shared_informer.go:377] "Caches are synced"
	I1207 23:34:26.009477       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1207 23:34:26.009607       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:34:26.030114       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:34:26.030164       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:34:26.035428       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:34:26.035802       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:34:26.035818       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:34:26.036976       1 config.go:200] "Starting service config controller"
	I1207 23:34:26.036999       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:34:26.037004       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:34:26.037008       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:34:26.037054       1 config.go:309] "Starting node config controller"
	I1207 23:34:26.037060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:34:26.036980       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:34:26.037072       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:34:26.137606       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:34:26.137642       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:34:26.137721       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:34:26.137727       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d66adfc6b32bea1a80601c96fde987705cda128fe924b957e14bd1c56036a463] <==
	E1207 23:34:17.431084       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1207 23:34:17.431938       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1207 23:34:17.473167       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1207 23:34:17.474080       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1207 23:34:17.521702       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1207 23:34:17.522686       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1207 23:34:17.569089       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1207 23:34:17.570111       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1207 23:34:17.608488       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:34:17.609684       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1207 23:34:17.617175       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1207 23:34:17.618067       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1207 23:34:17.658575       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1207 23:34:17.659703       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1207 23:34:17.669092       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1207 23:34:17.670269       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1207 23:34:17.729538       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1207 23:34:17.730695       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1207 23:34:17.739167       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope"
	E1207 23:34:17.740142       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1207 23:34:17.744293       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1207 23:34:17.745404       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1207 23:34:17.802167       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:34:17.803251       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	I1207 23:34:20.319410       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 07 23:34:25 no-preload-313006 kubelet[2259]: I1207 23:34:25.473156    2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ebc0bfad-9d66-4e97-ba23-878bf95416a6-kube-proxy\") pod \"kube-proxy-xw4pf\" (UID: \"ebc0bfad-9d66-4e97-ba23-878bf95416a6\") " pod="kube-system/kube-proxy-xw4pf"
	Dec 07 23:34:25 no-preload-313006 kubelet[2259]: I1207 23:34:25.473216    2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebc0bfad-9d66-4e97-ba23-878bf95416a6-xtables-lock\") pod \"kube-proxy-xw4pf\" (UID: \"ebc0bfad-9d66-4e97-ba23-878bf95416a6\") " pod="kube-system/kube-proxy-xw4pf"
	Dec 07 23:34:25 no-preload-313006 kubelet[2259]: I1207 23:34:25.473266    2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebc0bfad-9d66-4e97-ba23-878bf95416a6-lib-modules\") pod \"kube-proxy-xw4pf\" (UID: \"ebc0bfad-9d66-4e97-ba23-878bf95416a6\") " pod="kube-system/kube-proxy-xw4pf"
	Dec 07 23:34:25 no-preload-313006 kubelet[2259]: I1207 23:34:25.473317    2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvpgj\" (UniqueName: \"kubernetes.io/projected/8d7ee556-9db1-49ce-a52b-403f54085f1f-kube-api-access-mvpgj\") pod \"kindnet-nzf5r\" (UID: \"8d7ee556-9db1-49ce-a52b-403f54085f1f\") " pod="kube-system/kindnet-nzf5r"
	Dec 07 23:34:25 no-preload-313006 kubelet[2259]: E1207 23:34:25.964823    2259 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-313006" containerName="kube-controller-manager"
	Dec 07 23:34:26 no-preload-313006 kubelet[2259]: I1207 23:34:26.391848    2259 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-xw4pf" podStartSLOduration=1.391828037 podStartE2EDuration="1.391828037s" podCreationTimestamp="2025-12-07 23:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:34:26.391651105 +0000 UTC m=+7.146845296" watchObservedRunningTime="2025-12-07 23:34:26.391828037 +0000 UTC m=+7.147022229"
	Dec 07 23:34:27 no-preload-313006 kubelet[2259]: E1207 23:34:27.300375    2259 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-313006" containerName="kube-scheduler"
	Dec 07 23:34:28 no-preload-313006 kubelet[2259]: I1207 23:34:28.399675    2259 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-nzf5r" podStartSLOduration=1.166854535 podStartE2EDuration="3.399654756s" podCreationTimestamp="2025-12-07 23:34:25 +0000 UTC" firstStartedPulling="2025-12-07 23:34:25.729469421 +0000 UTC m=+6.484663607" lastFinishedPulling="2025-12-07 23:34:27.962269656 +0000 UTC m=+8.717463828" observedRunningTime="2025-12-07 23:34:28.399436487 +0000 UTC m=+9.154630680" watchObservedRunningTime="2025-12-07 23:34:28.399654756 +0000 UTC m=+9.154848947"
	Dec 07 23:34:30 no-preload-313006 kubelet[2259]: E1207 23:34:30.114045    2259 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-313006" containerName="etcd"
	Dec 07 23:34:30 no-preload-313006 kubelet[2259]: E1207 23:34:30.394141    2259 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-313006" containerName="etcd"
	Dec 07 23:34:34 no-preload-313006 kubelet[2259]: E1207 23:34:34.976922    2259 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-313006" containerName="kube-apiserver"
	Dec 07 23:34:35 no-preload-313006 kubelet[2259]: E1207 23:34:35.970038    2259 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-313006" containerName="kube-controller-manager"
	Dec 07 23:34:37 no-preload-313006 kubelet[2259]: E1207 23:34:37.305859    2259 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-313006" containerName="kube-scheduler"
	Dec 07 23:34:38 no-preload-313006 kubelet[2259]: I1207 23:34:38.803066    2259 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 07 23:34:38 no-preload-313006 kubelet[2259]: I1207 23:34:38.871439    2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c81bd338-0a5e-4937-8442-bbacd5f685c2-config-volume\") pod \"coredns-7d764666f9-btjrp\" (UID: \"c81bd338-0a5e-4937-8442-bbacd5f685c2\") " pod="kube-system/coredns-7d764666f9-btjrp"
	Dec 07 23:34:38 no-preload-313006 kubelet[2259]: I1207 23:34:38.871480    2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9c75fba7-bec3-421e-9f99-b51827afb29d-tmp\") pod \"storage-provisioner\" (UID: \"9c75fba7-bec3-421e-9f99-b51827afb29d\") " pod="kube-system/storage-provisioner"
	Dec 07 23:34:38 no-preload-313006 kubelet[2259]: I1207 23:34:38.871498    2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsth5\" (UniqueName: \"kubernetes.io/projected/9c75fba7-bec3-421e-9f99-b51827afb29d-kube-api-access-lsth5\") pod \"storage-provisioner\" (UID: \"9c75fba7-bec3-421e-9f99-b51827afb29d\") " pod="kube-system/storage-provisioner"
	Dec 07 23:34:38 no-preload-313006 kubelet[2259]: I1207 23:34:38.871515    2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkq6c\" (UniqueName: \"kubernetes.io/projected/c81bd338-0a5e-4937-8442-bbacd5f685c2-kube-api-access-vkq6c\") pod \"coredns-7d764666f9-btjrp\" (UID: \"c81bd338-0a5e-4937-8442-bbacd5f685c2\") " pod="kube-system/coredns-7d764666f9-btjrp"
	Dec 07 23:34:39 no-preload-313006 kubelet[2259]: E1207 23:34:39.418031    2259 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-btjrp" containerName="coredns"
	Dec 07 23:34:39 no-preload-313006 kubelet[2259]: I1207 23:34:39.427828    2259 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.427805812 podStartE2EDuration="14.427805812s" podCreationTimestamp="2025-12-07 23:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:34:39.427544295 +0000 UTC m=+20.182738494" watchObservedRunningTime="2025-12-07 23:34:39.427805812 +0000 UTC m=+20.183000003"
	Dec 07 23:34:39 no-preload-313006 kubelet[2259]: I1207 23:34:39.438350    2259 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-btjrp" podStartSLOduration=14.438312361 podStartE2EDuration="14.438312361s" podCreationTimestamp="2025-12-07 23:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:34:39.438224216 +0000 UTC m=+20.193418409" watchObservedRunningTime="2025-12-07 23:34:39.438312361 +0000 UTC m=+20.193506552"
	Dec 07 23:34:40 no-preload-313006 kubelet[2259]: E1207 23:34:40.420415    2259 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-btjrp" containerName="coredns"
	Dec 07 23:34:41 no-preload-313006 kubelet[2259]: E1207 23:34:41.425147    2259 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-btjrp" containerName="coredns"
	Dec 07 23:34:41 no-preload-313006 kubelet[2259]: I1207 23:34:41.489241    2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hq2k\" (UniqueName: \"kubernetes.io/projected/9f794bb8-ad22-47d0-a7a7-e5068ff54805-kube-api-access-6hq2k\") pod \"busybox\" (UID: \"9f794bb8-ad22-47d0-a7a7-e5068ff54805\") " pod="default/busybox"
	Dec 07 23:34:44 no-preload-313006 kubelet[2259]: I1207 23:34:44.448668    2259 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.96644655 podStartE2EDuration="3.448625965s" podCreationTimestamp="2025-12-07 23:34:41 +0000 UTC" firstStartedPulling="2025-12-07 23:34:41.777478391 +0000 UTC m=+22.532672563" lastFinishedPulling="2025-12-07 23:34:44.259657806 +0000 UTC m=+25.014851978" observedRunningTime="2025-12-07 23:34:44.447493283 +0000 UTC m=+25.202687475" watchObservedRunningTime="2025-12-07 23:34:44.448625965 +0000 UTC m=+25.203820156"
	
	
	==> storage-provisioner [05d6cec7a980155f3c0f3fca2e1afca74416a4b6e9269997cc029a6e1eebd4d2] <==
	I1207 23:34:39.200506       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 23:34:39.209683       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 23:34:39.209741       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 23:34:39.212192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:39.219247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:34:39.219517       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 23:34:39.219582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27117f0f-4148-42d8-a5da-bf1f690374b0", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-313006_ed522039-a365-47f7-aac1-8bd6ee0fa110 became leader
	I1207 23:34:39.219691       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-313006_ed522039-a365-47f7-aac1-8bd6ee0fa110!
	W1207 23:34:39.223870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:39.227431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:34:39.320106       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-313006_ed522039-a365-47f7-aac1-8bd6ee0fa110!
	W1207 23:34:41.231284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:41.238150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:43.241291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:43.314042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:45.317521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:45.323029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:47.326779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:47.330978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:49.335799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:49.341831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:51.345864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:34:51.349745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-313006 -n no-preload-313006
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-313006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-320477 --alsologtostderr -v=1
E1207 23:35:38.534392  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-320477 --alsologtostderr -v=1: exit status 80 (2.418968803s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-320477 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:35:38.023560  660349 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:35:38.023703  660349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:35:38.023714  660349 out.go:374] Setting ErrFile to fd 2...
	I1207 23:35:38.023721  660349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:35:38.023975  660349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:35:38.024244  660349 out.go:368] Setting JSON to false
	I1207 23:35:38.024268  660349 mustload.go:66] Loading cluster: old-k8s-version-320477
	I1207 23:35:38.024678  660349 config.go:182] Loaded profile config "old-k8s-version-320477": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1207 23:35:38.025084  660349 cli_runner.go:164] Run: docker container inspect old-k8s-version-320477 --format={{.State.Status}}
	I1207 23:35:38.043749  660349 host.go:66] Checking if "old-k8s-version-320477" exists ...
	I1207 23:35:38.044024  660349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:35:38.102809  660349 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-07 23:35:38.092650555 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:35:38.103470  660349 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-320477 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1207 23:35:38.105448  660349 out.go:179] * Pausing node old-k8s-version-320477 ... 
	I1207 23:35:38.106708  660349 host.go:66] Checking if "old-k8s-version-320477" exists ...
	I1207 23:35:38.107028  660349 ssh_runner.go:195] Run: systemctl --version
	I1207 23:35:38.107079  660349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-320477
	I1207 23:35:38.126316  660349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/old-k8s-version-320477/id_rsa Username:docker}
	I1207 23:35:38.221438  660349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:35:38.234716  660349 pause.go:52] kubelet running: true
	I1207 23:35:38.234792  660349 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:35:38.407109  660349 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:35:38.407220  660349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:35:38.476343  660349 cri.go:89] found id: "4b439bad9ad85b6dcd7bc9ce303a25519ec7b97359492cd12f2b5f913bfe91d6"
	I1207 23:35:38.476367  660349 cri.go:89] found id: "e5802a25760f8ce1babbff8e5ab0d37753e4c8f06edd2c4595f17533c8d75cb8"
	I1207 23:35:38.476371  660349 cri.go:89] found id: "3a169be3b943116304e4ac0add496f779a883bd6c9970be5183cbf2572dd3b72"
	I1207 23:35:38.476375  660349 cri.go:89] found id: "48fc3f42e00b15030c847b6ceb34f41299df9ffdebfb2d4eff9f587834a6f337"
	I1207 23:35:38.476378  660349 cri.go:89] found id: "7ac02f5275ac14463e5fd58a2169b7fdf2d51dd9e8b7dc1f1fab2b5d1e42f235"
	I1207 23:35:38.476381  660349 cri.go:89] found id: "935941a2cb637af36928ffb8fe952a120096af31c3a4cf9940d0decdc9dd0ffb"
	I1207 23:35:38.476384  660349 cri.go:89] found id: "3699584e5acbb7ce5f69043c7f75a0d7f118a2286a1460827d4e7093b932ea8f"
	I1207 23:35:38.476387  660349 cri.go:89] found id: "a21fad74c0501472726aa964a8eae6cf6097ab2ad2cc7f048b4b2e442c8ec636"
	I1207 23:35:38.476390  660349 cri.go:89] found id: "9a8b8635416941bed89621f1e677d2a500361f4b4b1de6dac578300985bf3afc"
	I1207 23:35:38.476395  660349 cri.go:89] found id: "8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75"
	I1207 23:35:38.476398  660349 cri.go:89] found id: "ce7324d8aac62ae7c0aa0221635e72e96bfcd16abd09a61ad8cef4c7e66ca07f"
	I1207 23:35:38.476401  660349 cri.go:89] found id: ""
	I1207 23:35:38.476439  660349 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:35:38.488810  660349 retry.go:31] will retry after 132.662518ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:35:38Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:35:38.622250  660349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:35:38.635672  660349 pause.go:52] kubelet running: false
	I1207 23:35:38.635754  660349 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:35:38.774972  660349 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:35:38.775053  660349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:35:38.847072  660349 cri.go:89] found id: "4b439bad9ad85b6dcd7bc9ce303a25519ec7b97359492cd12f2b5f913bfe91d6"
	I1207 23:35:38.847094  660349 cri.go:89] found id: "e5802a25760f8ce1babbff8e5ab0d37753e4c8f06edd2c4595f17533c8d75cb8"
	I1207 23:35:38.847098  660349 cri.go:89] found id: "3a169be3b943116304e4ac0add496f779a883bd6c9970be5183cbf2572dd3b72"
	I1207 23:35:38.847101  660349 cri.go:89] found id: "48fc3f42e00b15030c847b6ceb34f41299df9ffdebfb2d4eff9f587834a6f337"
	I1207 23:35:38.847104  660349 cri.go:89] found id: "7ac02f5275ac14463e5fd58a2169b7fdf2d51dd9e8b7dc1f1fab2b5d1e42f235"
	I1207 23:35:38.847108  660349 cri.go:89] found id: "935941a2cb637af36928ffb8fe952a120096af31c3a4cf9940d0decdc9dd0ffb"
	I1207 23:35:38.847110  660349 cri.go:89] found id: "3699584e5acbb7ce5f69043c7f75a0d7f118a2286a1460827d4e7093b932ea8f"
	I1207 23:35:38.847119  660349 cri.go:89] found id: "a21fad74c0501472726aa964a8eae6cf6097ab2ad2cc7f048b4b2e442c8ec636"
	I1207 23:35:38.847122  660349 cri.go:89] found id: "9a8b8635416941bed89621f1e677d2a500361f4b4b1de6dac578300985bf3afc"
	I1207 23:35:38.847128  660349 cri.go:89] found id: "8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75"
	I1207 23:35:38.847131  660349 cri.go:89] found id: "ce7324d8aac62ae7c0aa0221635e72e96bfcd16abd09a61ad8cef4c7e66ca07f"
	I1207 23:35:38.847134  660349 cri.go:89] found id: ""
	I1207 23:35:38.847170  660349 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:35:38.859610  660349 retry.go:31] will retry after 493.10353ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:35:38Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:35:39.353262  660349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:35:39.367002  660349 pause.go:52] kubelet running: false
	I1207 23:35:39.367056  660349 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:35:39.510887  660349 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:35:39.510996  660349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:35:39.583046  660349 cri.go:89] found id: "4b439bad9ad85b6dcd7bc9ce303a25519ec7b97359492cd12f2b5f913bfe91d6"
	I1207 23:35:39.583066  660349 cri.go:89] found id: "e5802a25760f8ce1babbff8e5ab0d37753e4c8f06edd2c4595f17533c8d75cb8"
	I1207 23:35:39.583070  660349 cri.go:89] found id: "3a169be3b943116304e4ac0add496f779a883bd6c9970be5183cbf2572dd3b72"
	I1207 23:35:39.583074  660349 cri.go:89] found id: "48fc3f42e00b15030c847b6ceb34f41299df9ffdebfb2d4eff9f587834a6f337"
	I1207 23:35:39.583077  660349 cri.go:89] found id: "7ac02f5275ac14463e5fd58a2169b7fdf2d51dd9e8b7dc1f1fab2b5d1e42f235"
	I1207 23:35:39.583080  660349 cri.go:89] found id: "935941a2cb637af36928ffb8fe952a120096af31c3a4cf9940d0decdc9dd0ffb"
	I1207 23:35:39.583083  660349 cri.go:89] found id: "3699584e5acbb7ce5f69043c7f75a0d7f118a2286a1460827d4e7093b932ea8f"
	I1207 23:35:39.583085  660349 cri.go:89] found id: "a21fad74c0501472726aa964a8eae6cf6097ab2ad2cc7f048b4b2e442c8ec636"
	I1207 23:35:39.583088  660349 cri.go:89] found id: "9a8b8635416941bed89621f1e677d2a500361f4b4b1de6dac578300985bf3afc"
	I1207 23:35:39.583094  660349 cri.go:89] found id: "8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75"
	I1207 23:35:39.583096  660349 cri.go:89] found id: "ce7324d8aac62ae7c0aa0221635e72e96bfcd16abd09a61ad8cef4c7e66ca07f"
	I1207 23:35:39.583099  660349 cri.go:89] found id: ""
	I1207 23:35:39.583141  660349 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:35:39.595835  660349 retry.go:31] will retry after 511.143098ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:35:39Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:35:40.107493  660349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:35:40.122958  660349 pause.go:52] kubelet running: false
	I1207 23:35:40.123011  660349 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:35:40.275944  660349 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:35:40.276029  660349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:35:40.351781  660349 cri.go:89] found id: "4b439bad9ad85b6dcd7bc9ce303a25519ec7b97359492cd12f2b5f913bfe91d6"
	I1207 23:35:40.351813  660349 cri.go:89] found id: "e5802a25760f8ce1babbff8e5ab0d37753e4c8f06edd2c4595f17533c8d75cb8"
	I1207 23:35:40.351819  660349 cri.go:89] found id: "3a169be3b943116304e4ac0add496f779a883bd6c9970be5183cbf2572dd3b72"
	I1207 23:35:40.351823  660349 cri.go:89] found id: "48fc3f42e00b15030c847b6ceb34f41299df9ffdebfb2d4eff9f587834a6f337"
	I1207 23:35:40.351828  660349 cri.go:89] found id: "7ac02f5275ac14463e5fd58a2169b7fdf2d51dd9e8b7dc1f1fab2b5d1e42f235"
	I1207 23:35:40.351832  660349 cri.go:89] found id: "935941a2cb637af36928ffb8fe952a120096af31c3a4cf9940d0decdc9dd0ffb"
	I1207 23:35:40.351834  660349 cri.go:89] found id: "3699584e5acbb7ce5f69043c7f75a0d7f118a2286a1460827d4e7093b932ea8f"
	I1207 23:35:40.351837  660349 cri.go:89] found id: "a21fad74c0501472726aa964a8eae6cf6097ab2ad2cc7f048b4b2e442c8ec636"
	I1207 23:35:40.351839  660349 cri.go:89] found id: "9a8b8635416941bed89621f1e677d2a500361f4b4b1de6dac578300985bf3afc"
	I1207 23:35:40.351852  660349 cri.go:89] found id: "8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75"
	I1207 23:35:40.351855  660349 cri.go:89] found id: "ce7324d8aac62ae7c0aa0221635e72e96bfcd16abd09a61ad8cef4c7e66ca07f"
	I1207 23:35:40.351857  660349 cri.go:89] found id: ""
	I1207 23:35:40.351895  660349 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:35:40.369308  660349 out.go:203] 
	W1207 23:35:40.370738  660349 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:35:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:35:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 23:35:40.370762  660349 out.go:285] * 
	* 
	W1207 23:35:40.375530  660349 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 23:35:40.376885  660349 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-320477 --alsologtostderr -v=1 failed: exit status 80
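The pause failure above is self-describing: minikube first enumerates running CRI containers per namespace with crictl (the cri.go lines), then shells out to "sudo runc list -f json", and that call fails with "open /run/runc: no such file or directory", which surfaces as GUEST_PAUSE and exit status 80. The sketch below is a minimal, hypothetical reproduction of that probe in Go, assuming only that runc is on PATH and that /run/runc is its default state root; it is not minikube code, and the missing-root interpretation is an illustrative guess (a node whose CRI-O is configured with a different OCI runtime keeps its state elsewhere).

	// probe_runc.go: illustrative sketch only, not minikube source.
	// Assumption: runc is on PATH and /run/runc is its default state root.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// probeRuncState mirrors the failing step in the log: `sudo runc list -f json`.
	// It distinguishes "runc has no state root at all" from other failures.
	func probeRuncState() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Printf("runc containers:\n%s\n", out)
			return nil
		}
		if strings.Contains(string(out), "/run/runc: no such file or directory") {
			// Same symptom as the GUEST_PAUSE error above: there is no runc state
			// directory, so nothing managed by runc can be listed (or paused) here.
			return fmt.Errorf("no runc state root at /run/runc: %w", err)
		}
		return fmt.Errorf("runc list failed: %s: %w", strings.TrimSpace(string(out)), err)
	}

	func main() {
		if err := probeRuncState(); err != nil {
			fmt.Println("pause precondition not met:", err)
		}
	}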
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-320477
helpers_test.go:243: (dbg) docker inspect old-k8s-version-320477:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60",
	        "Created": "2025-12-07T23:33:24.406627697Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 648013,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:34:37.904362181Z",
	            "FinishedAt": "2025-12-07T23:34:36.902588342Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60/hostname",
	        "HostsPath": "/var/lib/docker/containers/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60/hosts",
	        "LogPath": "/var/lib/docker/containers/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60-json.log",
	        "Name": "/old-k8s-version-320477",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-320477:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-320477",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60",
	                "LowerDir": "/var/lib/docker/overlay2/acd9d1d66636fbbdfd34477ab909bc56ba8678951aa24f32a68daf160b304ed3-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/acd9d1d66636fbbdfd34477ab909bc56ba8678951aa24f32a68daf160b304ed3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/acd9d1d66636fbbdfd34477ab909bc56ba8678951aa24f32a68daf160b304ed3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/acd9d1d66636fbbdfd34477ab909bc56ba8678951aa24f32a68daf160b304ed3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-320477",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-320477/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-320477",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-320477",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-320477",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9becde4ef7b99a441a965bc7e1f782c121ec76992b206c54733d22ae271b06e3",
	            "SandboxKey": "/var/run/docker/netns/9becde4ef7b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-320477": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "79f54ad63e607736183a174ecfbd71671c6240b2d3072bbde0376d130c69013c",
	                    "EndpointID": "90fbe59cab7277486e368ac06742dccfdba4f352e228d2db974734f5d862382a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f2:d1:8f:66:58:4f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-320477",
	                        "06913e870114"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
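The full docker inspect dump above is what the post-mortem captures; when only a handful of fields matter (container state, restart count, a published port), the same data can be read with Go templates, the same mechanism the harness uses for --format={{.Host}} just below and that minikube uses for the SSH port lookup near the end of this log. A small sketch, assuming the container name from the dump, old-k8s-version-320477, is still present on the host:

	// inspect_fields.go: illustrative sketch, not part of the test harness.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspectField asks dockerd for a single templated field instead of the full JSON.
	func inspectField(container, tmpl string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "--format", tmpl, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		name := "old-k8s-version-320477" // taken from the inspect output above
		for _, tmpl := range []string{
			"{{.State.Status}}", // "running" in the dump above
			"{{.RestartCount}}", // "0"
			`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`, // "33441"
		} {
			v, err := inspectField(name, tmpl)
			fmt.Printf("%-70s -> %q (err=%v)\n", tmpl, v, err)
		}
	}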
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-320477 -n old-k8s-version-320477
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-320477 -n old-k8s-version-320477: exit status 2 (360.753092ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
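Note the tolerant handling at helpers_test.go:247: minikube status --format={{.Host}} exits with status 2 yet still prints "Running", and the harness records the nonzero exit as "may be ok" rather than failing outright. A hedged Go sketch of that behaviour, assuming only the command line shown above (the function and variable names here are invented for illustration):

	// status_probe.go: illustrative sketch of the tolerant status check above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// hostStatus returns whatever `minikube status --format={{.Host}}` printed,
	// together with the exit code, treating a nonzero exit as informational
	// (the harness above notes "exit status 2 (may be ok)").
	func hostStatus(minikubeBin, profile string) (status string, exitCode int, err error) {
		cmd := exec.Command(minikubeBin, "status", "--format", "{{.Host}}", "-p", profile, "-n", profile)
		out, runErr := cmd.Output()
		status = strings.TrimSpace(string(out))
		var exitErr *exec.ExitError
		if errors.As(runErr, &exitErr) {
			return status, exitErr.ExitCode(), nil // stdout is still usable, e.g. "Running"
		}
		if runErr != nil {
			return "", -1, runErr
		}
		return status, 0, nil
	}

	func main() {
		s, code, err := hostStatus("out/minikube-linux-amd64", "old-k8s-version-320477")
		fmt.Printf("host=%q exit=%d err=%v\n", s, code, err)
	}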
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-320477 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-320477 logs -n 25: (1.221120498s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-600852 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo containerd config dump                                                                                                                                                                                                  │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo crio config                                                                                                                                                                                                             │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ delete  │ -p cilium-600852                                                                                                                                                                                                                              │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ start   │ -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p cert-expiration-612608 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-612608 │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ delete  │ -p cert-expiration-612608                                                                                                                                                                                                                     │ cert-expiration-612608 │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-320477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ stop    │ -p old-k8s-version-320477 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-320477 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p stopped-upgrade-604160                                                                                                                                                                                                                     │ stopped-upgrade-604160 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-654118     │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-313006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ stop    │ -p no-preload-313006 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ addons  │ enable dashboard -p no-preload-313006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ image   │ old-k8s-version-320477 image list --format=json                                                                                                                                                                                               │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ pause   │ -p old-k8s-version-320477 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:35:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:35:11.948416  656318 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:35:11.948543  656318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:35:11.948555  656318 out.go:374] Setting ErrFile to fd 2...
	I1207 23:35:11.948562  656318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:35:11.948862  656318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:35:11.949446  656318 out.go:368] Setting JSON to false
	I1207 23:35:11.951084  656318 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8256,"bootTime":1765142256,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:35:11.951163  656318 start.go:143] virtualization: kvm guest
	I1207 23:35:11.953338  656318 out.go:179] * [no-preload-313006] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:35:11.954572  656318 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:35:11.954581  656318 notify.go:221] Checking for updates...
	I1207 23:35:11.956967  656318 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:35:11.958450  656318 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:35:11.959838  656318 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:35:11.961173  656318 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:35:11.962510  656318 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:35:11.964222  656318 config.go:182] Loaded profile config "no-preload-313006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:35:11.965018  656318 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:35:11.990062  656318 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:35:11.990190  656318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:35:12.053881  656318 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-07 23:35:12.043233543 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:35:12.054041  656318 docker.go:319] overlay module found
	I1207 23:35:12.058529  656318 out.go:179] * Using the docker driver based on existing profile
	I1207 23:35:12.060005  656318 start.go:309] selected driver: docker
	I1207 23:35:12.060027  656318 start.go:927] validating driver "docker" against &{Name:no-preload-313006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:35:12.060153  656318 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:35:12.060829  656318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:35:12.120195  656318 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-07 23:35:12.110157918 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:35:12.120546  656318 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:35:12.120583  656318 cni.go:84] Creating CNI manager for ""
	I1207 23:35:12.120656  656318 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:35:12.120720  656318 start.go:353] cluster config:
	{Name:no-preload-313006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disable
Metrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:35:12.123832  656318 out.go:179] * Starting "no-preload-313006" primary control-plane node in "no-preload-313006" cluster
	I1207 23:35:12.125168  656318 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:35:12.126482  656318 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:35:12.128060  656318 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:35:12.128163  656318 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/config.json ...
	I1207 23:35:12.128184  656318 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:35:12.128409  656318 cache.go:107] acquiring lock: {Name:mk35f35d02b51e73648018346caa8577bcb02423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128478  656318 cache.go:107] acquiring lock: {Name:mk6e7f82161fd3b4764748eae2defc53fa3a2d89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128505  656318 cache.go:107] acquiring lock: {Name:mkc02ccbaf1950fb11a48894c61699039caba7ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128557  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 23:35:12.128419  656318 cache.go:107] acquiring lock: {Name:mk9827fb3e41345bba396b2d0abebc9c76ae1b5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128572  656318 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 178.599µs
	I1207 23:35:12.128593  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1207 23:35:12.128599  656318 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 23:35:12.128556  656318 cache.go:107] acquiring lock: {Name:mk073566b0fe2be152587ae35afb0e7b5e91cd92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128607  656318 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 147.567µs
	I1207 23:35:12.128625  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1207 23:35:12.128612  656318 cache.go:107] acquiring lock: {Name:mke7b5e65769096d2da605e337724f9c23cd0a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128625  656318 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1207 23:35:12.128594  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1207 23:35:12.128634  656318 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 227.627µs
	I1207 23:35:12.128645  656318 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1207 23:35:12.128618  656318 cache.go:107] acquiring lock: {Name:mkbd6b49f7665e4f1e59327a6638af64accfbd8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128647  656318 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 151.513µs
	I1207 23:35:12.128656  656318 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1207 23:35:12.128674  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1207 23:35:12.128675  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1207 23:35:12.128685  656318 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 70.765µs
	I1207 23:35:12.128689  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1207 23:35:12.128683  656318 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 74.871µs
	I1207 23:35:12.128695  656318 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1207 23:35:12.128698  656318 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1207 23:35:12.128698  656318 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 198.957µs
	I1207 23:35:12.128706  656318 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1207 23:35:12.128749  656318 cache.go:107] acquiring lock: {Name:mk187eff8ce17bd71a4f3c7c012208c9c4122014 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.129000  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1207 23:35:12.129023  656318 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 322.927µs
	I1207 23:35:12.129035  656318 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1207 23:35:12.129044  656318 cache.go:87] Successfully saved all images to host disk.
	I1207 23:35:12.153514  656318 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:35:12.153537  656318 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:35:12.153559  656318 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:35:12.153597  656318 start.go:360] acquireMachinesLock for no-preload-313006: {Name:mk5eb3348861def558ca942a9632e734d86e74b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.153666  656318 start.go:364] duration metric: took 48.816µs to acquireMachinesLock for "no-preload-313006"
	I1207 23:35:12.153689  656318 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:35:12.153698  656318 fix.go:54] fixHost starting: 
	I1207 23:35:12.153990  656318 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:35:12.176776  656318 fix.go:112] recreateIfNeeded on no-preload-313006: state=Stopped err=<nil>
	W1207 23:35:12.176815  656318 fix.go:138] unexpected machine state, will restart: <nil>
	W1207 23:35:07.779077  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	W1207 23:35:10.277391  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	W1207 23:35:12.278306  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	I1207 23:35:08.536012  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:08.536492  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:08.536550  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:08.536603  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:08.562895  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:08.562919  610371 cri.go:89] found id: ""
	I1207 23:35:08.562931  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:08.562983  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:08.567203  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:08.567279  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:08.595795  610371 cri.go:89] found id: ""
	I1207 23:35:08.595824  610371 logs.go:282] 0 containers: []
	W1207 23:35:08.595835  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:08.595843  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:08.595907  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:08.622786  610371 cri.go:89] found id: ""
	I1207 23:35:08.622815  610371 logs.go:282] 0 containers: []
	W1207 23:35:08.622827  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:08.622836  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:08.622892  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:08.652163  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:08.652186  610371 cri.go:89] found id: ""
	I1207 23:35:08.652194  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:08.652257  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:08.656318  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:08.656413  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:08.683507  610371 cri.go:89] found id: ""
	I1207 23:35:08.683535  610371 logs.go:282] 0 containers: []
	W1207 23:35:08.683546  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:08.683553  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:08.683622  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:08.711226  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:08.711248  610371 cri.go:89] found id: ""
	I1207 23:35:08.711258  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:08.711322  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:08.715234  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:08.715291  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:08.741725  610371 cri.go:89] found id: ""
	I1207 23:35:08.741749  610371 logs.go:282] 0 containers: []
	W1207 23:35:08.741757  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:08.741763  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:08.741819  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:08.769008  610371 cri.go:89] found id: ""
	I1207 23:35:08.769038  610371 logs.go:282] 0 containers: []
	W1207 23:35:08.769049  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:08.769062  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:08.769080  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:08.800220  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:08.800254  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:08.891250  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:08.891294  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:08.924849  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:08.924883  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:08.980767  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:08.980807  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:08.980824  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:09.010590  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:09.010620  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:09.037911  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:09.037940  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:09.064244  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:09.064271  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:11.618410  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:11.618783  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:11.618838  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:11.618885  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:11.649406  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:11.649432  610371 cri.go:89] found id: ""
	I1207 23:35:11.649443  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:11.649503  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:11.653924  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:11.653989  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:11.682619  610371 cri.go:89] found id: ""
	I1207 23:35:11.682649  610371 logs.go:282] 0 containers: []
	W1207 23:35:11.682661  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:11.682670  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:11.682723  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:11.713785  610371 cri.go:89] found id: ""
	I1207 23:35:11.713809  610371 logs.go:282] 0 containers: []
	W1207 23:35:11.713817  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:11.713825  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:11.713885  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:11.743249  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:11.743272  610371 cri.go:89] found id: ""
	I1207 23:35:11.743283  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:11.743345  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:11.747570  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:11.747629  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:11.775069  610371 cri.go:89] found id: ""
	I1207 23:35:11.775097  610371 logs.go:282] 0 containers: []
	W1207 23:35:11.775106  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:11.775115  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:11.775176  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:11.806376  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:11.806395  610371 cri.go:89] found id: ""
	I1207 23:35:11.806404  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:11.806462  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:11.810858  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:11.810937  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:11.840493  610371 cri.go:89] found id: ""
	I1207 23:35:11.840517  610371 logs.go:282] 0 containers: []
	W1207 23:35:11.840526  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:11.840531  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:11.840592  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:11.870124  610371 cri.go:89] found id: ""
	I1207 23:35:11.870152  610371 logs.go:282] 0 containers: []
	W1207 23:35:11.870165  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:11.870174  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:11.870186  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:11.970358  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:11.970392  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:12.005052  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:12.005085  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:12.074835  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:12.074860  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:12.074878  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:12.113612  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:12.113649  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:12.145273  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:12.145305  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:12.180088  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:12.180128  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:12.236007  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:12.236047  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1207 23:35:11.768724  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:14.267844  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	I1207 23:35:12.178474  656318 out.go:252] * Restarting existing docker container for "no-preload-313006" ...
	I1207 23:35:12.178568  656318 cli_runner.go:164] Run: docker start no-preload-313006
	I1207 23:35:12.438308  656318 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:35:12.457155  656318 kic.go:430] container "no-preload-313006" state is running.
	I1207 23:35:12.457571  656318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-313006
	I1207 23:35:12.476733  656318 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/config.json ...
	I1207 23:35:12.476989  656318 machine.go:94] provisionDockerMachine start ...
	I1207 23:35:12.477103  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:12.496259  656318 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:12.496522  656318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1207 23:35:12.496538  656318 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:35:12.497091  656318 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48192->127.0.0.1:33448: read: connection reset by peer
	I1207 23:35:15.629483  656318 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-313006
	
	I1207 23:35:15.629515  656318 ubuntu.go:182] provisioning hostname "no-preload-313006"
	I1207 23:35:15.629577  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:15.648744  656318 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:15.649071  656318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1207 23:35:15.649100  656318 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-313006 && echo "no-preload-313006" | sudo tee /etc/hostname
	I1207 23:35:15.788999  656318 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-313006
	
	I1207 23:35:15.789079  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:15.808467  656318 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:15.808737  656318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1207 23:35:15.808767  656318 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-313006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-313006/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-313006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:35:15.938166  656318 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:35:15.938209  656318 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:35:15.938239  656318 ubuntu.go:190] setting up certificates
	I1207 23:35:15.938256  656318 provision.go:84] configureAuth start
	I1207 23:35:15.938341  656318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-313006
	I1207 23:35:15.956774  656318 provision.go:143] copyHostCerts
	I1207 23:35:15.956833  656318 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:35:15.956841  656318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:35:15.956910  656318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:35:15.956998  656318 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:35:15.957006  656318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:35:15.957032  656318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:35:15.957082  656318 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:35:15.957089  656318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:35:15.957111  656318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:35:15.957165  656318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.no-preload-313006 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-313006]
	I1207 23:35:16.153011  656318 provision.go:177] copyRemoteCerts
	I1207 23:35:16.153084  656318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:35:16.153146  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:16.172313  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:16.265958  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1207 23:35:16.284340  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:35:16.302279  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:35:16.320037  656318 provision.go:87] duration metric: took 381.764174ms to configureAuth
	I1207 23:35:16.320062  656318 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:35:16.320237  656318 config.go:182] Loaded profile config "no-preload-313006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:35:16.320386  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:16.339139  656318 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:16.339392  656318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1207 23:35:16.339417  656318 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:35:16.651730  656318 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:35:16.651762  656318 machine.go:97] duration metric: took 4.174751851s to provisionDockerMachine
	I1207 23:35:16.651777  656318 start.go:293] postStartSetup for "no-preload-313006" (driver="docker")
	I1207 23:35:16.651805  656318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:35:16.651874  656318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:35:16.651928  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:16.672055  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:16.767166  656318 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:35:16.770993  656318 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:35:16.771023  656318 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:35:16.771036  656318 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:35:16.771105  656318 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:35:16.771209  656318 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:35:16.771336  656318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:35:16.779720  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:35:16.797612  656318 start.go:296] duration metric: took 145.818898ms for postStartSetup
	I1207 23:35:16.797700  656318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:35:16.797760  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:16.816136  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:16.907681  656318 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:35:16.912551  656318 fix.go:56] duration metric: took 4.758844234s for fixHost
	I1207 23:35:16.912579  656318 start.go:83] releasing machines lock for "no-preload-313006", held for 4.758900576s
	I1207 23:35:16.912658  656318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-313006
	I1207 23:35:16.931785  656318 ssh_runner.go:195] Run: cat /version.json
	I1207 23:35:16.931808  656318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:35:16.931834  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:16.931868  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	W1207 23:35:14.777930  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	W1207 23:35:16.778148  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	I1207 23:35:14.770590  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:14.770979  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:14.771036  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:14.771099  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:14.799519  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:14.799546  610371 cri.go:89] found id: ""
	I1207 23:35:14.799554  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:14.799612  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:14.803831  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:14.803893  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:14.831634  610371 cri.go:89] found id: ""
	I1207 23:35:14.831659  610371 logs.go:282] 0 containers: []
	W1207 23:35:14.831668  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:14.831674  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:14.831724  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:14.859086  610371 cri.go:89] found id: ""
	I1207 23:35:14.859112  610371 logs.go:282] 0 containers: []
	W1207 23:35:14.859123  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:14.859131  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:14.859194  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:14.886672  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:14.886698  610371 cri.go:89] found id: ""
	I1207 23:35:14.886708  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:14.886778  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:14.890772  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:14.890838  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:14.918055  610371 cri.go:89] found id: ""
	I1207 23:35:14.918083  610371 logs.go:282] 0 containers: []
	W1207 23:35:14.918094  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:14.918103  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:14.918166  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:14.945022  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:14.945039  610371 cri.go:89] found id: ""
	I1207 23:35:14.945047  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:14.945105  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:14.949226  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:14.949288  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:14.977021  610371 cri.go:89] found id: ""
	I1207 23:35:14.977056  610371 logs.go:282] 0 containers: []
	W1207 23:35:14.977068  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:14.977077  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:14.977145  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:15.004617  610371 cri.go:89] found id: ""
	I1207 23:35:15.004645  610371 logs.go:282] 0 containers: []
	W1207 23:35:15.004659  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:15.004670  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:15.004683  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:15.035811  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:15.035845  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:15.063487  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:15.063518  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:15.090238  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:15.090271  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:15.142350  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:15.142384  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:15.173149  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:15.173177  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:15.258314  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:15.258368  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:15.292647  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:15.292682  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:15.350650  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:16.952030  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:16.952207  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:17.100538  656318 ssh_runner.go:195] Run: systemctl --version
	I1207 23:35:17.107283  656318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:35:17.142202  656318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:35:17.146927  656318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:35:17.146987  656318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:35:17.155750  656318 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:35:17.155770  656318 start.go:496] detecting cgroup driver to use...
	I1207 23:35:17.155808  656318 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:35:17.155848  656318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:35:17.170400  656318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:35:17.182815  656318 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:35:17.182868  656318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:35:17.197759  656318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:35:17.210593  656318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:35:17.296103  656318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:35:17.380613  656318 docker.go:234] disabling docker service ...
	I1207 23:35:17.380687  656318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:35:17.395177  656318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:35:17.407843  656318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:35:17.494399  656318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:35:17.577708  656318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:35:17.590916  656318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:35:17.605817  656318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:35:17.605875  656318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.614997  656318 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:35:17.615071  656318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.624281  656318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.633698  656318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.643425  656318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:35:17.653185  656318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.663667  656318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.672863  656318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.683221  656318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:35:17.691500  656318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:35:17.699401  656318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:35:17.783727  656318 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:35:17.936763  656318 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:35:17.936836  656318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:35:17.942075  656318 start.go:564] Will wait 60s for crictl version
	I1207 23:35:17.942150  656318 ssh_runner.go:195] Run: which crictl
	I1207 23:35:17.946683  656318 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:35:17.975279  656318 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:35:17.975381  656318 ssh_runner.go:195] Run: crio --version
	I1207 23:35:18.006830  656318 ssh_runner.go:195] Run: crio --version
	I1207 23:35:18.040015  656318 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1207 23:35:18.041321  656318 cli_runner.go:164] Run: docker network inspect no-preload-313006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:35:18.061342  656318 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1207 23:35:18.066102  656318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:35:18.078024  656318 kubeadm.go:884] updating cluster {Name:no-preload-313006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:35:18.078159  656318 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:35:18.078214  656318 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:35:18.112713  656318 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:35:18.112734  656318 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:35:18.112742  656318 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1207 23:35:18.112881  656318 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-313006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:35:18.112966  656318 ssh_runner.go:195] Run: crio config
	I1207 23:35:18.164942  656318 cni.go:84] Creating CNI manager for ""
	I1207 23:35:18.164971  656318 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:35:18.164988  656318 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:35:18.165020  656318 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-313006 NodeName:no-preload-313006 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:35:18.165188  656318 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-313006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:35:18.165268  656318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1207 23:35:18.174644  656318 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:35:18.174720  656318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:35:18.183368  656318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1207 23:35:18.197285  656318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1207 23:35:18.211469  656318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1207 23:35:18.226652  656318 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:35:18.230797  656318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:35:18.242628  656318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:35:18.327553  656318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:35:18.355034  656318 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006 for IP: 192.168.85.2
	I1207 23:35:18.355061  656318 certs.go:195] generating shared ca certs ...
	I1207 23:35:18.355087  656318 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:35:18.355231  656318 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:35:18.355270  656318 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:35:18.355280  656318 certs.go:257] generating profile certs ...
	I1207 23:35:18.355400  656318 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/client.key
	I1207 23:35:18.355469  656318 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.key.717a55f9
	I1207 23:35:18.355506  656318 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.key
	I1207 23:35:18.355630  656318 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:35:18.355672  656318 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:35:18.355686  656318 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:35:18.355716  656318 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:35:18.355753  656318 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:35:18.355787  656318 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:35:18.355833  656318 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:35:18.356409  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:35:18.377099  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:35:18.397963  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:35:18.420060  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:35:18.446621  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1207 23:35:18.468058  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:35:18.486707  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:35:18.505018  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:35:18.523682  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:35:18.542031  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:35:18.560957  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:35:18.580157  656318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:35:18.593339  656318 ssh_runner.go:195] Run: openssl version
	I1207 23:35:18.599350  656318 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:35:18.606639  656318 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:35:18.614063  656318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:35:18.617803  656318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:35:18.617866  656318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:35:18.653512  656318 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:35:18.662289  656318 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:35:18.670374  656318 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:35:18.678482  656318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:35:18.682677  656318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:35:18.682742  656318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:35:18.717952  656318 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:35:18.726286  656318 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:35:18.734160  656318 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:35:18.741914  656318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:35:18.745795  656318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:35:18.745854  656318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:35:18.782639  656318 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:35:18.791005  656318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:35:18.795082  656318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:35:18.829997  656318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:35:18.871259  656318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:35:18.917443  656318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:35:18.968560  656318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:35:19.019600  656318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 23:35:19.060297  656318 kubeadm.go:401] StartCluster: {Name:no-preload-313006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:35:19.060459  656318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:35:19.060516  656318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:35:19.096921  656318 cri.go:89] found id: "7a318b0832368150c50b8e6bcc0b249c6c0f5e0835f526a9036a3f9d6818cc85"
	I1207 23:35:19.096947  656318 cri.go:89] found id: "404e1d5beb2da9d3cc45722c51fc2e1c7b0c587a72d76030ae16a0117eb8350a"
	I1207 23:35:19.096954  656318 cri.go:89] found id: "087d0f5345ac825bcf193ab138e126157b165b5aa86f1b652afd90640d7fda6e"
	I1207 23:35:19.096959  656318 cri.go:89] found id: "1902052b7fa9a51b713591332e8f8f19d13383667710cc98390abfe859d91e2c"
	I1207 23:35:19.096964  656318 cri.go:89] found id: ""
	I1207 23:35:19.097016  656318 ssh_runner.go:195] Run: sudo runc list -f json
	W1207 23:35:19.110261  656318 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:35:19Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:35:19.110457  656318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:35:19.118474  656318 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:35:19.118492  656318 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:35:19.118538  656318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:35:19.126045  656318 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:35:19.126976  656318 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-313006" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:35:19.127658  656318 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-389542/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-313006" cluster setting kubeconfig missing "no-preload-313006" context setting]
	I1207 23:35:19.128563  656318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:35:19.130361  656318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:35:19.138196  656318 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1207 23:35:19.138225  656318 kubeadm.go:602] duration metric: took 19.726131ms to restartPrimaryControlPlane
	I1207 23:35:19.138235  656318 kubeadm.go:403] duration metric: took 77.955614ms to StartCluster
	I1207 23:35:19.138251  656318 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:35:19.138320  656318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:35:19.140789  656318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:35:19.141076  656318 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:35:19.141139  656318 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:35:19.141265  656318 addons.go:70] Setting storage-provisioner=true in profile "no-preload-313006"
	I1207 23:35:19.141290  656318 addons.go:239] Setting addon storage-provisioner=true in "no-preload-313006"
	I1207 23:35:19.141288  656318 addons.go:70] Setting dashboard=true in profile "no-preload-313006"
	W1207 23:35:19.141304  656318 addons.go:248] addon storage-provisioner should already be in state true
	I1207 23:35:19.141312  656318 addons.go:239] Setting addon dashboard=true in "no-preload-313006"
	I1207 23:35:19.141310  656318 config.go:182] Loaded profile config "no-preload-313006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	W1207 23:35:19.141321  656318 addons.go:248] addon dashboard should already be in state true
	I1207 23:35:19.141364  656318 host.go:66] Checking if "no-preload-313006" exists ...
	I1207 23:35:19.141376  656318 addons.go:70] Setting default-storageclass=true in profile "no-preload-313006"
	I1207 23:35:19.141392  656318 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-313006"
	I1207 23:35:19.141364  656318 host.go:66] Checking if "no-preload-313006" exists ...
	I1207 23:35:19.141736  656318 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:35:19.141908  656318 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:35:19.142215  656318 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:35:19.144950  656318 out.go:179] * Verifying Kubernetes components...
	I1207 23:35:19.146370  656318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:35:19.168034  656318 addons.go:239] Setting addon default-storageclass=true in "no-preload-313006"
	W1207 23:35:19.168061  656318 addons.go:248] addon default-storageclass should already be in state true
	I1207 23:35:19.168089  656318 host.go:66] Checking if "no-preload-313006" exists ...
	I1207 23:35:19.168608  656318 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:35:19.171207  656318 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1207 23:35:19.171237  656318 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:35:19.172376  656318 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:35:19.172401  656318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:35:19.172466  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:19.173497  656318 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1207 23:35:16.268349  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:18.767379  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	I1207 23:35:19.174674  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1207 23:35:19.174694  656318 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1207 23:35:19.174770  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:19.193085  656318 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:35:19.193110  656318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:35:19.193171  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:19.205950  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:19.207362  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:19.232071  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:19.299719  656318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:35:19.315306  656318 node_ready.go:35] waiting up to 6m0s for node "no-preload-313006" to be "Ready" ...
	I1207 23:35:19.325691  656318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:35:19.325833  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1207 23:35:19.325863  656318 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1207 23:35:19.341669  656318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:35:19.343525  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1207 23:35:19.343552  656318 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1207 23:35:19.361500  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1207 23:35:19.361525  656318 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1207 23:35:19.378454  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1207 23:35:19.378479  656318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1207 23:35:19.396790  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1207 23:35:19.396818  656318 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1207 23:35:19.412274  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1207 23:35:19.412299  656318 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1207 23:35:19.427184  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1207 23:35:19.427208  656318 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1207 23:35:19.442505  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1207 23:35:19.442533  656318 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1207 23:35:19.459824  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:35:19.459852  656318 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1207 23:35:19.476388  656318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:35:20.275752  656318 node_ready.go:49] node "no-preload-313006" is "Ready"
	I1207 23:35:20.275790  656318 node_ready.go:38] duration metric: took 960.419225ms for node "no-preload-313006" to be "Ready" ...
	I1207 23:35:20.275808  656318 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:35:20.275862  656318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:35:20.843041  656318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.517314986s)
	I1207 23:35:20.843106  656318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.501413543s)
	I1207 23:35:20.843277  656318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.366851394s)
	I1207 23:35:20.843416  656318 api_server.go:72] duration metric: took 1.702306398s to wait for apiserver process to appear ...
	I1207 23:35:20.843443  656318 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:35:20.843467  656318 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1207 23:35:20.847022  656318 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-313006 addons enable metrics-server
	
	I1207 23:35:20.848990  656318 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:35:20.849018  656318 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:35:20.853374  656318 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1207 23:35:20.854578  656318 addons.go:530] duration metric: took 1.713446995s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1207 23:35:21.344271  656318 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1207 23:35:21.349572  656318 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:35:21.349610  656318 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:35:21.844301  656318 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1207 23:35:21.848684  656318 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1207 23:35:21.849641  656318 api_server.go:141] control plane version: v1.35.0-beta.0
	I1207 23:35:21.849665  656318 api_server.go:131] duration metric: took 1.006215022s to wait for apiserver health ...
	I1207 23:35:21.849676  656318 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:35:21.853103  656318 system_pods.go:59] 8 kube-system pods found
	I1207 23:35:21.853131  656318 system_pods.go:61] "coredns-7d764666f9-btjrp" [c81bd338-0a5e-4937-8442-bbacd5f685c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:35:21.853139  656318 system_pods.go:61] "etcd-no-preload-313006" [2124ac32-ed11-49d4-b522-e0bb8b268bb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:35:21.853145  656318 system_pods.go:61] "kindnet-nzf5r" [8d7ee556-9db1-49ce-a52b-403f54085f1f] Running
	I1207 23:35:21.853152  656318 system_pods.go:61] "kube-apiserver-no-preload-313006" [3c161ca5-34a9-4712-8eb3-6d444b18fae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:35:21.853158  656318 system_pods.go:61] "kube-controller-manager-no-preload-313006" [8b681c4d-7203-410e-a987-5f988f352aed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:35:21.853165  656318 system_pods.go:61] "kube-proxy-xw4pf" [ebc0bfad-9d66-4e97-ba23-878bf95416a6] Running
	I1207 23:35:21.853172  656318 system_pods.go:61] "kube-scheduler-no-preload-313006" [40d9aeaa-01fd-49cc-9e20-4339df06b915] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:35:21.853178  656318 system_pods.go:61] "storage-provisioner" [9c75fba7-bec3-421e-9f99-b51827afb29d] Running
	I1207 23:35:21.853185  656318 system_pods.go:74] duration metric: took 3.502188ms to wait for pod list to return data ...
	I1207 23:35:21.853194  656318 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:35:21.855301  656318 default_sa.go:45] found service account: "default"
	I1207 23:35:21.855321  656318 default_sa.go:55] duration metric: took 2.121154ms for default service account to be created ...
	I1207 23:35:21.855349  656318 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:35:21.857747  656318 system_pods.go:86] 8 kube-system pods found
	I1207 23:35:21.857774  656318 system_pods.go:89] "coredns-7d764666f9-btjrp" [c81bd338-0a5e-4937-8442-bbacd5f685c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:35:21.857782  656318 system_pods.go:89] "etcd-no-preload-313006" [2124ac32-ed11-49d4-b522-e0bb8b268bb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:35:21.857787  656318 system_pods.go:89] "kindnet-nzf5r" [8d7ee556-9db1-49ce-a52b-403f54085f1f] Running
	I1207 23:35:21.857793  656318 system_pods.go:89] "kube-apiserver-no-preload-313006" [3c161ca5-34a9-4712-8eb3-6d444b18fae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:35:21.857801  656318 system_pods.go:89] "kube-controller-manager-no-preload-313006" [8b681c4d-7203-410e-a987-5f988f352aed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:35:21.857805  656318 system_pods.go:89] "kube-proxy-xw4pf" [ebc0bfad-9d66-4e97-ba23-878bf95416a6] Running
	I1207 23:35:21.857820  656318 system_pods.go:89] "kube-scheduler-no-preload-313006" [40d9aeaa-01fd-49cc-9e20-4339df06b915] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:35:21.857827  656318 system_pods.go:89] "storage-provisioner" [9c75fba7-bec3-421e-9f99-b51827afb29d] Running
	I1207 23:35:21.857833  656318 system_pods.go:126] duration metric: took 2.478892ms to wait for k8s-apps to be running ...
	I1207 23:35:21.857843  656318 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:35:21.857886  656318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:35:21.871226  656318 system_svc.go:56] duration metric: took 13.375207ms WaitForService to wait for kubelet
	I1207 23:35:21.871251  656318 kubeadm.go:587] duration metric: took 2.730144893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:35:21.871273  656318 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:35:21.874022  656318 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:35:21.874047  656318 node_conditions.go:123] node cpu capacity is 8
	I1207 23:35:21.874066  656318 node_conditions.go:105] duration metric: took 2.787587ms to run NodePressure ...
	I1207 23:35:21.874082  656318 start.go:242] waiting for startup goroutines ...
	I1207 23:35:21.874091  656318 start.go:247] waiting for cluster config update ...
	I1207 23:35:21.874105  656318 start.go:256] writing updated cluster config ...
	I1207 23:35:21.874408  656318 ssh_runner.go:195] Run: rm -f paused
	I1207 23:35:21.878113  656318 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:35:21.881662  656318 pod_ready.go:83] waiting for pod "coredns-7d764666f9-btjrp" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:35:19.278553  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	W1207 23:35:21.286435  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	I1207 23:35:17.851662  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:17.852200  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:17.852262  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:17.852348  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:17.883096  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:17.883119  610371 cri.go:89] found id: ""
	I1207 23:35:17.883129  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:17.883192  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:17.887460  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:17.887546  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:17.916961  610371 cri.go:89] found id: ""
	I1207 23:35:17.916994  610371 logs.go:282] 0 containers: []
	W1207 23:35:17.917006  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:17.917014  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:17.917075  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:17.945282  610371 cri.go:89] found id: ""
	I1207 23:35:17.945307  610371 logs.go:282] 0 containers: []
	W1207 23:35:17.945317  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:17.945335  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:17.945398  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:17.975415  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:17.975435  610371 cri.go:89] found id: ""
	I1207 23:35:17.975446  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:17.975502  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:17.979886  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:17.979942  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:18.008898  610371 cri.go:89] found id: ""
	I1207 23:35:18.008922  610371 logs.go:282] 0 containers: []
	W1207 23:35:18.008932  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:18.008941  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:18.008998  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:18.037934  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:18.037963  610371 cri.go:89] found id: ""
	I1207 23:35:18.037975  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:18.038039  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:18.042097  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:18.042153  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:18.071090  610371 cri.go:89] found id: ""
	I1207 23:35:18.071116  610371 logs.go:282] 0 containers: []
	W1207 23:35:18.071128  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:18.071135  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:18.071203  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:18.100284  610371 cri.go:89] found id: ""
	I1207 23:35:18.100317  610371 logs.go:282] 0 containers: []
	W1207 23:35:18.100353  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:18.100366  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:18.100383  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:18.131797  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:18.131829  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:18.187969  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:18.187999  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:18.219984  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:18.220013  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:18.317010  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:18.317047  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:18.350183  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:18.350217  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:18.414137  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:18.414161  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:18.414177  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:18.458306  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:18.458356  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:20.987397  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:20.987852  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:20.987919  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:20.987991  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:21.015394  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:21.015416  610371 cri.go:89] found id: ""
	I1207 23:35:21.015424  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:21.015476  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:21.019925  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:21.020017  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:21.049407  610371 cri.go:89] found id: ""
	I1207 23:35:21.049437  610371 logs.go:282] 0 containers: []
	W1207 23:35:21.049449  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:21.049458  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:21.049516  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:21.080281  610371 cri.go:89] found id: ""
	I1207 23:35:21.080304  610371 logs.go:282] 0 containers: []
	W1207 23:35:21.080312  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:21.080319  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:21.080393  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:21.107885  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:21.107907  610371 cri.go:89] found id: ""
	I1207 23:35:21.107917  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:21.107981  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:21.111937  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:21.111992  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:21.138289  610371 cri.go:89] found id: ""
	I1207 23:35:21.138322  610371 logs.go:282] 0 containers: []
	W1207 23:35:21.138353  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:21.138363  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:21.138438  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:21.172080  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:21.172102  610371 cri.go:89] found id: ""
	I1207 23:35:21.172110  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:21.172161  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:21.176899  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:21.176960  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:21.206810  610371 cri.go:89] found id: ""
	I1207 23:35:21.206840  610371 logs.go:282] 0 containers: []
	W1207 23:35:21.206851  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:21.206861  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:21.206984  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:21.234708  610371 cri.go:89] found id: ""
	I1207 23:35:21.234738  610371 logs.go:282] 0 containers: []
	W1207 23:35:21.234750  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:21.234761  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:21.234774  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:21.271854  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:21.271887  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:21.393167  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:21.393215  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:21.433752  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:21.433790  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:21.502990  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:21.503011  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:21.503026  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:21.539520  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:21.539556  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:21.567587  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:21.567614  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:21.595533  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:21.595560  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1207 23:35:20.768645  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:23.268591  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:23.777530  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	I1207 23:35:24.780184  647748 pod_ready.go:94] pod "coredns-5dd5756b68-vv8vq" is "Ready"
	I1207 23:35:24.780215  647748 pod_ready.go:86] duration metric: took 34.50798829s for pod "coredns-5dd5756b68-vv8vq" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:24.785416  647748 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:24.792173  647748 pod_ready.go:94] pod "etcd-old-k8s-version-320477" is "Ready"
	I1207 23:35:24.792206  647748 pod_ready.go:86] duration metric: took 6.754925ms for pod "etcd-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:24.795277  647748 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:24.800544  647748 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-320477" is "Ready"
	I1207 23:35:24.800574  647748 pod_ready.go:86] duration metric: took 5.271021ms for pod "kube-apiserver-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:24.803774  647748 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:24.975538  647748 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-320477" is "Ready"
	I1207 23:35:24.975568  647748 pod_ready.go:86] duration metric: took 171.769801ms for pod "kube-controller-manager-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:25.176874  647748 pod_ready.go:83] waiting for pod "kube-proxy-vlx4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:25.576538  647748 pod_ready.go:94] pod "kube-proxy-vlx4n" is "Ready"
	I1207 23:35:25.576571  647748 pod_ready.go:86] duration metric: took 399.665404ms for pod "kube-proxy-vlx4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:25.777348  647748 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:26.176424  647748 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-320477" is "Ready"
	I1207 23:35:26.176458  647748 pod_ready.go:86] duration metric: took 399.077633ms for pod "kube-scheduler-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:26.176474  647748 pod_ready.go:40] duration metric: took 35.908019433s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:35:26.241491  647748 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1207 23:35:26.246676  647748 out.go:203] 
	W1207 23:35:26.248164  647748 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1207 23:35:26.249591  647748 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1207 23:35:26.250938  647748 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-320477" cluster and "default" namespace by default
	W1207 23:35:23.887383  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	W1207 23:35:25.887666  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	I1207 23:35:24.145199  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:24.145681  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:24.145739  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:24.145803  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:24.174832  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:24.174853  610371 cri.go:89] found id: ""
	I1207 23:35:24.174863  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:24.174926  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:24.178925  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:24.178994  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:24.207384  610371 cri.go:89] found id: ""
	I1207 23:35:24.207408  610371 logs.go:282] 0 containers: []
	W1207 23:35:24.207416  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:24.207422  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:24.207477  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:24.235647  610371 cri.go:89] found id: ""
	I1207 23:35:24.235672  610371 logs.go:282] 0 containers: []
	W1207 23:35:24.235683  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:24.235691  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:24.235751  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:24.261906  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:24.261935  610371 cri.go:89] found id: ""
	I1207 23:35:24.261945  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:24.262006  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:24.266007  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:24.266081  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:24.294095  610371 cri.go:89] found id: ""
	I1207 23:35:24.294119  610371 logs.go:282] 0 containers: []
	W1207 23:35:24.294127  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:24.294133  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:24.294185  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:24.323467  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:24.323493  610371 cri.go:89] found id: ""
	I1207 23:35:24.323504  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:24.323570  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:24.328418  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:24.328498  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:24.356897  610371 cri.go:89] found id: ""
	I1207 23:35:24.356925  610371 logs.go:282] 0 containers: []
	W1207 23:35:24.356933  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:24.356941  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:24.357008  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:24.383099  610371 cri.go:89] found id: ""
	I1207 23:35:24.383129  610371 logs.go:282] 0 containers: []
	W1207 23:35:24.383139  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:24.383151  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:24.383166  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:24.446213  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:24.446233  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:24.446246  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:24.483568  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:24.483599  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:24.513874  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:24.513902  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:24.542160  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:24.542188  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:24.596417  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:24.596462  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:24.629041  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:24.629071  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:24.730686  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:24.730734  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:27.278787  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:27.279224  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:27.279287  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:27.279379  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:27.313549  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:27.313579  610371 cri.go:89] found id: ""
	I1207 23:35:27.313590  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:27.313658  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:27.317990  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:27.318066  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:27.348758  610371 cri.go:89] found id: ""
	I1207 23:35:27.348790  610371 logs.go:282] 0 containers: []
	W1207 23:35:27.348801  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:27.348809  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:27.348862  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:27.378751  610371 cri.go:89] found id: ""
	I1207 23:35:27.378781  610371 logs.go:282] 0 containers: []
	W1207 23:35:27.378792  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:27.378800  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:27.378863  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:27.409470  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:27.409496  610371 cri.go:89] found id: ""
	I1207 23:35:27.409507  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:27.409573  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:27.413743  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:27.413803  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:27.438886  610371 cri.go:89] found id: ""
	I1207 23:35:27.438908  610371 logs.go:282] 0 containers: []
	W1207 23:35:27.438915  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:27.438922  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:27.438969  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:27.465832  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:27.465851  610371 cri.go:89] found id: ""
	I1207 23:35:27.465859  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:27.465907  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:27.470028  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:27.470087  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:27.498004  610371 cri.go:89] found id: ""
	I1207 23:35:27.498031  610371 logs.go:282] 0 containers: []
	W1207 23:35:27.498040  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:27.498046  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:27.498104  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:27.531695  610371 cri.go:89] found id: ""
	I1207 23:35:27.531726  610371 logs.go:282] 0 containers: []
	W1207 23:35:27.531738  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:27.531752  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:27.531770  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:27.567961  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:27.567996  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:27.597994  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:27.598027  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:27.624755  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:27.624783  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:27.673747  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:27.673788  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:27.705594  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:27.705622  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1207 23:35:25.771909  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:28.268503  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:27.888403  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	W1207 23:35:30.395579  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	I1207 23:35:27.796064  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:27.796102  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:27.828122  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:27.828157  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:27.900211  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:30.402302  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:30.402826  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:30.402884  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:30.402941  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:30.432100  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:30.432124  610371 cri.go:89] found id: ""
	I1207 23:35:30.432134  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:30.432199  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:30.436216  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:30.436285  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:30.463194  610371 cri.go:89] found id: ""
	I1207 23:35:30.463222  610371 logs.go:282] 0 containers: []
	W1207 23:35:30.463234  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:30.463242  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:30.463305  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:30.490300  610371 cri.go:89] found id: ""
	I1207 23:35:30.490345  610371 logs.go:282] 0 containers: []
	W1207 23:35:30.490366  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:30.490373  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:30.490471  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:30.519350  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:30.519375  610371 cri.go:89] found id: ""
	I1207 23:35:30.519386  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:30.519448  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:30.524212  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:30.524281  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:30.556293  610371 cri.go:89] found id: ""
	I1207 23:35:30.556341  610371 logs.go:282] 0 containers: []
	W1207 23:35:30.556353  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:30.556361  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:30.556420  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:30.585462  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:30.585485  610371 cri.go:89] found id: ""
	I1207 23:35:30.585495  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:30.585560  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:30.589797  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:30.589875  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:30.617489  610371 cri.go:89] found id: ""
	I1207 23:35:30.617519  610371 logs.go:282] 0 containers: []
	W1207 23:35:30.617527  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:30.617534  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:30.617590  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:30.646366  610371 cri.go:89] found id: ""
	I1207 23:35:30.646397  610371 logs.go:282] 0 containers: []
	W1207 23:35:30.646409  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:30.646420  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:30.646439  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:30.680062  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:30.680097  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:30.707582  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:30.707620  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:30.737601  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:30.737631  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:30.788229  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:30.788262  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:30.819038  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:30.819064  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:30.905293  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:30.905341  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:30.938667  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:30.938699  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:30.995828  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1207 23:35:30.767929  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:33.268083  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:35.268148  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:32.886623  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	W1207 23:35:34.887514  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	W1207 23:35:36.888219  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	I1207 23:35:33.496490  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:33.496969  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:33.497025  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:33.497077  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:33.526638  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:33.526662  610371 cri.go:89] found id: ""
	I1207 23:35:33.526671  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:33.526724  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:33.530825  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:33.530886  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:33.558539  610371 cri.go:89] found id: ""
	I1207 23:35:33.558571  610371 logs.go:282] 0 containers: []
	W1207 23:35:33.558582  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:33.558590  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:33.558662  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:33.588286  610371 cri.go:89] found id: ""
	I1207 23:35:33.588313  610371 logs.go:282] 0 containers: []
	W1207 23:35:33.588340  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:33.588350  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:33.588418  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:33.617392  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:33.617413  610371 cri.go:89] found id: ""
	I1207 23:35:33.617422  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:33.617497  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:33.621633  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:33.621701  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:33.650021  610371 cri.go:89] found id: ""
	I1207 23:35:33.650052  610371 logs.go:282] 0 containers: []
	W1207 23:35:33.650063  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:33.650072  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:33.650130  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:33.679493  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:33.679515  610371 cri.go:89] found id: ""
	I1207 23:35:33.679528  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:33.679578  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:33.684158  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:33.684242  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:33.713020  610371 cri.go:89] found id: ""
	I1207 23:35:33.713054  610371 logs.go:282] 0 containers: []
	W1207 23:35:33.713065  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:33.713072  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:33.713133  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:33.741503  610371 cri.go:89] found id: ""
	I1207 23:35:33.741546  610371 logs.go:282] 0 containers: []
	W1207 23:35:33.741560  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:33.741572  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:33.741589  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:33.769103  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:33.769130  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:33.796567  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:33.796597  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:33.848201  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:33.848239  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:33.880229  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:33.880268  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:33.972822  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:33.972857  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:34.006071  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:34.006106  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:34.063824  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:34.063842  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:34.063856  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:36.597353  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:36.597745  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:36.597800  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:36.597854  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:36.624901  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:36.624920  610371 cri.go:89] found id: ""
	I1207 23:35:36.624928  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:36.624984  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:36.629123  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:36.629190  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:36.657786  610371 cri.go:89] found id: ""
	I1207 23:35:36.657811  610371 logs.go:282] 0 containers: []
	W1207 23:35:36.657819  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:36.657826  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:36.657889  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:36.687422  610371 cri.go:89] found id: ""
	I1207 23:35:36.687448  610371 logs.go:282] 0 containers: []
	W1207 23:35:36.687457  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:36.687463  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:36.687535  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:36.715591  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:36.715619  610371 cri.go:89] found id: ""
	I1207 23:35:36.715631  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:36.715697  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:36.720183  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:36.720259  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:36.747310  610371 cri.go:89] found id: ""
	I1207 23:35:36.747346  610371 logs.go:282] 0 containers: []
	W1207 23:35:36.747358  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:36.747366  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:36.747419  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:36.775096  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:36.775122  610371 cri.go:89] found id: ""
	I1207 23:35:36.775130  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:36.775179  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:36.779113  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:36.779201  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:36.806689  610371 cri.go:89] found id: ""
	I1207 23:35:36.806715  610371 logs.go:282] 0 containers: []
	W1207 23:35:36.806724  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:36.806732  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:36.806794  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:36.833714  610371 cri.go:89] found id: ""
	I1207 23:35:36.833743  610371 logs.go:282] 0 containers: []
	W1207 23:35:36.833755  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:36.833768  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:36.833788  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:36.892869  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:36.892889  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:36.892904  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:36.929341  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:36.929379  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:36.958723  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:36.958755  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:36.987042  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:36.987069  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:37.036685  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:37.036721  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:37.067894  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:37.067928  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:37.153427  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:37.153465  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1207 23:35:37.768505  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:40.269131  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	I1207 23:35:39.685726  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:39.686266  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:39.686369  610371 kubeadm.go:602] duration metric: took 4m1.634419702s to restartPrimaryControlPlane
	W1207 23:35:39.686435  610371 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1207 23:35:39.686491  610371 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 23:35:40.281086  610371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:35:40.296250  610371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:35:40.306090  610371 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:35:40.306167  610371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:35:40.315128  610371 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:35:40.315150  610371 kubeadm.go:158] found existing configuration files:
	
	I1207 23:35:40.315203  610371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:35:40.324757  610371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:35:40.324824  610371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:35:40.333716  610371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:35:40.343236  610371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:35:40.343402  610371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:35:40.353044  610371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:35:40.361443  610371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:35:40.361512  610371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:35:40.370148  610371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:35:40.379620  610371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:35:40.379676  610371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 23:35:40.390202  610371 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:35:40.429571  610371 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1207 23:35:40.429637  610371 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:35:40.509163  610371 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:35:40.509296  610371 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:35:40.509397  610371 kubeadm.go:319] OS: Linux
	I1207 23:35:40.509462  610371 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:35:40.509544  610371 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:35:40.509619  610371 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:35:40.509689  610371 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:35:40.509789  610371 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:35:40.509859  610371 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:35:40.509939  610371 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:35:40.510020  610371 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:35:40.583318  610371 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:35:40.583494  610371 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:35:40.583648  610371 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:35:40.590554  610371 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Dec 07 23:35:08 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:08.106389663Z" level=info msg="Created container ce7324d8aac62ae7c0aa0221635e72e96bfcd16abd09a61ad8cef4c7e66ca07f: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p5lgr/kubernetes-dashboard" id=099c5ed2-69d4-4f69-8f98-53d05fa1b45e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:08 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:08.107049574Z" level=info msg="Starting container: ce7324d8aac62ae7c0aa0221635e72e96bfcd16abd09a61ad8cef4c7e66ca07f" id=b188801a-7f0c-43ec-8825-7ffd282d936b name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:35:08 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:08.108893396Z" level=info msg="Started container" PID=1715 containerID=ce7324d8aac62ae7c0aa0221635e72e96bfcd16abd09a61ad8cef4c7e66ca07f description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p5lgr/kubernetes-dashboard id=b188801a-7f0c-43ec-8825-7ffd282d936b name=/runtime.v1.RuntimeService/StartContainer sandboxID=673d09231e7616d4762786ffd70413008d5bca0a22552eca8c69832d3da4d9ae
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.152962497Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a2cbfe45-c956-468f-be19-9379f658b5c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.153960897Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=82a1259e-151a-4b02-a098-6630a01f2b58 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.154966516Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1fb9ab7c-b588-453e-9166-ee030bc482b0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.155107079Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.160146282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.160367356Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b78889e500dfba489acee2a4b2fec51114d9d5b72c5e3c7f3c4b1437713ba549/merged/etc/passwd: no such file or directory"
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.160408279Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b78889e500dfba489acee2a4b2fec51114d9d5b72c5e3c7f3c4b1437713ba549/merged/etc/group: no such file or directory"
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.160758969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.20507523Z" level=info msg="Created container 4b439bad9ad85b6dcd7bc9ce303a25519ec7b97359492cd12f2b5f913bfe91d6: kube-system/storage-provisioner/storage-provisioner" id=1fb9ab7c-b588-453e-9166-ee030bc482b0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.205750132Z" level=info msg="Starting container: 4b439bad9ad85b6dcd7bc9ce303a25519ec7b97359492cd12f2b5f913bfe91d6" id=644bd256-9257-4470-a78b-dd7d56009617 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.207820773Z" level=info msg="Started container" PID=1738 containerID=4b439bad9ad85b6dcd7bc9ce303a25519ec7b97359492cd12f2b5f913bfe91d6 description=kube-system/storage-provisioner/storage-provisioner id=644bd256-9257-4470-a78b-dd7d56009617 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4aaa9811f6442560618bf8c3587c3de8b7e1d770f1e311131198cbd3a8fd9766
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.034234725Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5e5e0198-546a-4956-a03b-9e077fb30431 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.035830137Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bad4ac84-ca5b-4162-a1ac-c091b1c96ab6 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.037141075Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk/dashboard-metrics-scraper" id=72b0d225-c733-4974-918a-8dc9988a1121 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.037313199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.046680405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.047417035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.086166985Z" level=info msg="Created container 8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk/dashboard-metrics-scraper" id=72b0d225-c733-4974-918a-8dc9988a1121 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.086937566Z" level=info msg="Starting container: 8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75" id=7c361fcf-d4fc-4919-a7e3-fe91585df4af name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.090467033Z" level=info msg="Started container" PID=1753 containerID=8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk/dashboard-metrics-scraper id=7c361fcf-d4fc-4919-a7e3-fe91585df4af name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbe2e83f51aa768d059dc865706a5132983064fe63d5f1b171980434174cc148
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.168567623Z" level=info msg="Removing container: 0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be" id=a58439ef-6794-4281-af79-26c2689ec483 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.182701617Z" level=info msg="Removed container 0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk/dashboard-metrics-scraper" id=a58439ef-6794-4281-af79-26c2689ec483 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8b580c253981d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   fbe2e83f51aa7       dashboard-metrics-scraper-5f989dc9cf-ksnsk       kubernetes-dashboard
	4b439bad9ad85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   4aaa9811f6442       storage-provisioner                              kube-system
	ce7324d8aac62       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   673d09231e761       kubernetes-dashboard-8694d4445c-p5lgr            kubernetes-dashboard
	0292e466a3104       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   2f02c60fea14c       busybox                                          default
	e5802a25760f8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     0                   29eb706c8139b       coredns-5dd5756b68-vv8vq                         kube-system
	3a169be3b9431       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   ce28c70449e99       kindnet-gnv88                                    kube-system
	48fc3f42e00b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   4aaa9811f6442       storage-provisioner                              kube-system
	7ac02f5275ac1       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  0                   ff02be16e7894       kube-proxy-vlx4n                                 kube-system
	935941a2cb637       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   4b91391978d24       kube-apiserver-old-k8s-version-320477            kube-system
	3699584e5acbb       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   772d5c5546d5f       kube-controller-manager-old-k8s-version-320477   kube-system
	a21fad74c0501       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   25083588cc9dc       kube-scheduler-old-k8s-version-320477            kube-system
	9a8b863541694       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   0d45412d81bb6       etcd-old-k8s-version-320477                      kube-system
	
	
	==> coredns [e5802a25760f8ce1babbff8e5ab0d37753e4c8f06edd2c4595f17533c8d75cb8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36868 - 38104 "HINFO IN 1738503828150855575.3130993462533399884. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021146681s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-320477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-320477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=old-k8s-version-320477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_33_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:33:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-320477
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:35:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:35:19 +0000   Sun, 07 Dec 2025 23:33:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:35:19 +0000   Sun, 07 Dec 2025 23:33:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:35:19 +0000   Sun, 07 Dec 2025 23:33:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:35:19 +0000   Sun, 07 Dec 2025 23:34:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-320477
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                94c12e17-34f4-4521-b4e4-c632ca1c3651
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-vv8vq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-old-k8s-version-320477                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-gnv88                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-320477             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-320477    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-vlx4n                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-320477             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-ksnsk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-p5lgr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 108s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-320477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m2s                 kubelet          Node old-k8s-version-320477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s                 kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m2s                 kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-320477 event: Registered Node old-k8s-version-320477 in Controller
	  Normal  NodeReady                95s                  kubelet          Node old-k8s-version-320477 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node old-k8s-version-320477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                  node-controller  Node old-k8s-version-320477 event: Registered Node old-k8s-version-320477 in Controller
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [9a8b8635416941bed89621f1e677d2a500361f4b4b1de6dac578300985bf3afc] <==
	{"level":"info","ts":"2025-12-07T23:34:46.626748Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-07T23:34:46.626764Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-07T23:34:46.626658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-07T23:34:46.626905Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-12-07T23:34:46.627043Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-07T23:34:46.627077Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-07T23:34:46.629254Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-07T23:34:46.629403Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-07T23:34:46.629457Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-07T23:34:46.629587Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-07T23:34:46.629626Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-07T23:34:47.9187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-07T23:34:47.918742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-07T23:34:47.918756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-07T23:34:47.918767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-07T23:34:47.918772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-07T23:34:47.918789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-07T23:34:47.918803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-07T23:34:47.91983Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-320477 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-07T23:34:47.919875Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-07T23:34:47.919923Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-07T23:34:47.920125Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-07T23:34:47.920185Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-07T23:34:47.92202Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-07T23:34:47.922266Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:35:41 up  2:18,  0 user,  load average: 1.78, 2.07, 1.77
	Linux old-k8s-version-320477 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3a169be3b943116304e4ac0add496f779a883bd6c9970be5183cbf2572dd3b72] <==
	I1207 23:34:49.701207       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:34:49.701716       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1207 23:34:49.701978       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:34:49.702000       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:34:49.702037       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:34:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:34:49.993190       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:34:50.040525       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:34:50.040628       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:34:50.041129       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:34:50.441165       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:34:50.441193       1 metrics.go:72] Registering metrics
	I1207 23:34:50.441245       1 controller.go:711] "Syncing nftables rules"
	I1207 23:34:59.946430       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:34:59.946482       1 main.go:301] handling current node
	I1207 23:35:09.944958       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:35:09.944995       1 main.go:301] handling current node
	I1207 23:35:19.944836       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:35:19.944871       1 main.go:301] handling current node
	I1207 23:35:29.946770       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:35:29.946813       1 main.go:301] handling current node
	I1207 23:35:39.949524       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:35:39.949559       1 main.go:301] handling current node
	
	
	==> kube-apiserver [935941a2cb637af36928ffb8fe952a120096af31c3a4cf9940d0decdc9dd0ffb] <==
	I1207 23:34:49.033352       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1207 23:34:49.035003       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1207 23:34:49.035072       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1207 23:34:49.035685       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1207 23:34:49.041500       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1207 23:34:49.041548       1 shared_informer.go:318] Caches are synced for configmaps
	I1207 23:34:49.041530       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 23:34:49.042413       1 aggregator.go:166] initial CRD sync complete...
	I1207 23:34:49.042468       1 autoregister_controller.go:141] Starting autoregister controller
	I1207 23:34:49.042476       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:34:49.042485       1 cache.go:39] Caches are synced for autoregister controller
	E1207 23:34:49.060401       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:34:49.075890       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:34:49.936418       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 23:34:50.107682       1 controller.go:624] quota admission added evaluator for: namespaces
	I1207 23:34:50.140806       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1207 23:34:50.160476       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:34:50.168297       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:34:50.175539       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1207 23:34:50.212300       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.200.123"}
	I1207 23:34:50.226620       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.128.239"}
	I1207 23:35:01.729214       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:35:01.729264       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:35:01.791975       1 controller.go:624] quota admission added evaluator for: endpoints
	I1207 23:35:01.815923       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3699584e5acbb7ce5f69043c7f75a0d7f118a2286a1460827d4e7093b932ea8f] <==
	I1207 23:35:01.847272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.697µs"
	I1207 23:35:01.851798       1 shared_informer.go:318] Caches are synced for resource quota
	I1207 23:35:01.852157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.864µs"
	I1207 23:35:01.853700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.124882ms"
	I1207 23:35:01.853785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.313µs"
	I1207 23:35:01.860034       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="39.012µs"
	I1207 23:35:01.869247       1 shared_informer.go:318] Caches are synced for stateful set
	I1207 23:35:01.895798       1 shared_informer.go:318] Caches are synced for disruption
	I1207 23:35:01.910319       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1207 23:35:01.940172       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1207 23:35:01.940185       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1207 23:35:01.941454       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1207 23:35:01.942552       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1207 23:35:02.263438       1 shared_informer.go:318] Caches are synced for garbage collector
	I1207 23:35:02.289806       1 shared_informer.go:318] Caches are synced for garbage collector
	I1207 23:35:02.289857       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1207 23:35:05.118433       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.5µs"
	I1207 23:35:06.123536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.651µs"
	I1207 23:35:07.128292       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.817µs"
	I1207 23:35:08.136932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.667945ms"
	I1207 23:35:08.137153       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.754µs"
	I1207 23:35:24.328488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.269602ms"
	I1207 23:35:24.328606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.214µs"
	I1207 23:35:25.183126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.23µs"
	I1207 23:35:32.152319       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.659µs"
	
	
	==> kube-proxy [7ac02f5275ac14463e5fd58a2169b7fdf2d51dd9e8b7dc1f1fab2b5d1e42f235] <==
	I1207 23:34:49.483237       1 server_others.go:69] "Using iptables proxy"
	I1207 23:34:49.493194       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1207 23:34:49.511968       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:34:49.514868       1 server_others.go:152] "Using iptables Proxier"
	I1207 23:34:49.514910       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1207 23:34:49.514921       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1207 23:34:49.514962       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 23:34:49.515223       1 server.go:846] "Version info" version="v1.28.0"
	I1207 23:34:49.515282       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:34:49.516127       1 config.go:97] "Starting endpoint slice config controller"
	I1207 23:34:49.516658       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 23:34:49.516248       1 config.go:188] "Starting service config controller"
	I1207 23:34:49.516814       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 23:34:49.516586       1 config.go:315] "Starting node config controller"
	I1207 23:34:49.516864       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 23:34:49.617660       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 23:34:49.618451       1 shared_informer.go:318] Caches are synced for service config
	I1207 23:34:49.620063       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a21fad74c0501472726aa964a8eae6cf6097ab2ad2cc7f048b4b2e442c8ec636] <==
	I1207 23:34:47.292890       1 serving.go:348] Generated self-signed cert in-memory
	W1207 23:34:48.967539       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:34:48.967586       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:34:48.967603       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:34:48.967614       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:34:49.015453       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1207 23:34:49.015515       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:34:49.019249       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:34:49.019286       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 23:34:49.021316       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1207 23:34:49.021428       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1207 23:34:49.119490       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 07 23:35:01 old-k8s-version-320477 kubelet[727]: I1207 23:35:01.839733     727 topology_manager.go:215] "Topology Admit Handler" podUID="1c93ee9e-303c-45f3-85db-45aa00340c87" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-ksnsk"
	Dec 07 23:35:01 old-k8s-version-320477 kubelet[727]: I1207 23:35:01.956458     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1c93ee9e-303c-45f3-85db-45aa00340c87-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-ksnsk\" (UID: \"1c93ee9e-303c-45f3-85db-45aa00340c87\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk"
	Dec 07 23:35:01 old-k8s-version-320477 kubelet[727]: I1207 23:35:01.956505     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/990e3703-ccdc-419b-9739-4009d4eef45d-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-p5lgr\" (UID: \"990e3703-ccdc-419b-9739-4009d4eef45d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p5lgr"
	Dec 07 23:35:01 old-k8s-version-320477 kubelet[727]: I1207 23:35:01.956537     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5j4b\" (UniqueName: \"kubernetes.io/projected/990e3703-ccdc-419b-9739-4009d4eef45d-kube-api-access-h5j4b\") pod \"kubernetes-dashboard-8694d4445c-p5lgr\" (UID: \"990e3703-ccdc-419b-9739-4009d4eef45d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p5lgr"
	Dec 07 23:35:01 old-k8s-version-320477 kubelet[727]: I1207 23:35:01.956698     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjpp7\" (UniqueName: \"kubernetes.io/projected/1c93ee9e-303c-45f3-85db-45aa00340c87-kube-api-access-mjpp7\") pod \"dashboard-metrics-scraper-5f989dc9cf-ksnsk\" (UID: \"1c93ee9e-303c-45f3-85db-45aa00340c87\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk"
	Dec 07 23:35:05 old-k8s-version-320477 kubelet[727]: I1207 23:35:05.106056     727 scope.go:117] "RemoveContainer" containerID="a6a2217224e189b80aa48bf8f1fb1a2f648cc2077b29b228c6988af4b9496ec8"
	Dec 07 23:35:06 old-k8s-version-320477 kubelet[727]: I1207 23:35:06.110001     727 scope.go:117] "RemoveContainer" containerID="a6a2217224e189b80aa48bf8f1fb1a2f648cc2077b29b228c6988af4b9496ec8"
	Dec 07 23:35:06 old-k8s-version-320477 kubelet[727]: I1207 23:35:06.110184     727 scope.go:117] "RemoveContainer" containerID="0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be"
	Dec 07 23:35:06 old-k8s-version-320477 kubelet[727]: E1207 23:35:06.110586     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ksnsk_kubernetes-dashboard(1c93ee9e-303c-45f3-85db-45aa00340c87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk" podUID="1c93ee9e-303c-45f3-85db-45aa00340c87"
	Dec 07 23:35:07 old-k8s-version-320477 kubelet[727]: I1207 23:35:07.114708     727 scope.go:117] "RemoveContainer" containerID="0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be"
	Dec 07 23:35:07 old-k8s-version-320477 kubelet[727]: E1207 23:35:07.115061     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ksnsk_kubernetes-dashboard(1c93ee9e-303c-45f3-85db-45aa00340c87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk" podUID="1c93ee9e-303c-45f3-85db-45aa00340c87"
	Dec 07 23:35:08 old-k8s-version-320477 kubelet[727]: I1207 23:35:08.130608     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p5lgr" podStartSLOduration=1.234558298 podCreationTimestamp="2025-12-07 23:35:01 +0000 UTC" firstStartedPulling="2025-12-07 23:35:02.167571226 +0000 UTC m=+16.238450781" lastFinishedPulling="2025-12-07 23:35:08.063552402 +0000 UTC m=+22.134431960" observedRunningTime="2025-12-07 23:35:08.129995523 +0000 UTC m=+22.200875083" watchObservedRunningTime="2025-12-07 23:35:08.130539477 +0000 UTC m=+22.201419037"
	Dec 07 23:35:12 old-k8s-version-320477 kubelet[727]: I1207 23:35:12.141668     727 scope.go:117] "RemoveContainer" containerID="0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be"
	Dec 07 23:35:12 old-k8s-version-320477 kubelet[727]: E1207 23:35:12.142086     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ksnsk_kubernetes-dashboard(1c93ee9e-303c-45f3-85db-45aa00340c87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk" podUID="1c93ee9e-303c-45f3-85db-45aa00340c87"
	Dec 07 23:35:20 old-k8s-version-320477 kubelet[727]: I1207 23:35:20.152159     727 scope.go:117] "RemoveContainer" containerID="48fc3f42e00b15030c847b6ceb34f41299df9ffdebfb2d4eff9f587834a6f337"
	Dec 07 23:35:25 old-k8s-version-320477 kubelet[727]: I1207 23:35:25.033585     727 scope.go:117] "RemoveContainer" containerID="0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be"
	Dec 07 23:35:25 old-k8s-version-320477 kubelet[727]: I1207 23:35:25.167147     727 scope.go:117] "RemoveContainer" containerID="0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be"
	Dec 07 23:35:25 old-k8s-version-320477 kubelet[727]: I1207 23:35:25.167416     727 scope.go:117] "RemoveContainer" containerID="8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75"
	Dec 07 23:35:25 old-k8s-version-320477 kubelet[727]: E1207 23:35:25.167801     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ksnsk_kubernetes-dashboard(1c93ee9e-303c-45f3-85db-45aa00340c87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk" podUID="1c93ee9e-303c-45f3-85db-45aa00340c87"
	Dec 07 23:35:32 old-k8s-version-320477 kubelet[727]: I1207 23:35:32.142548     727 scope.go:117] "RemoveContainer" containerID="8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75"
	Dec 07 23:35:32 old-k8s-version-320477 kubelet[727]: E1207 23:35:32.142901     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ksnsk_kubernetes-dashboard(1c93ee9e-303c-45f3-85db-45aa00340c87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk" podUID="1c93ee9e-303c-45f3-85db-45aa00340c87"
	Dec 07 23:35:38 old-k8s-version-320477 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 07 23:35:38 old-k8s-version-320477 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 07 23:35:38 old-k8s-version-320477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 07 23:35:38 old-k8s-version-320477 systemd[1]: kubelet.service: Consumed 1.538s CPU time.
	
	
	==> kubernetes-dashboard [ce7324d8aac62ae7c0aa0221635e72e96bfcd16abd09a61ad8cef4c7e66ca07f] <==
	2025/12/07 23:35:08 Starting overwatch
	2025/12/07 23:35:08 Using namespace: kubernetes-dashboard
	2025/12/07 23:35:08 Using in-cluster config to connect to apiserver
	2025/12/07 23:35:08 Using secret token for csrf signing
	2025/12/07 23:35:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/07 23:35:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/07 23:35:08 Successful initial request to the apiserver, version: v1.28.0
	2025/12/07 23:35:08 Generating JWE encryption key
	2025/12/07 23:35:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/07 23:35:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/07 23:35:08 Initializing JWE encryption key from synchronized object
	2025/12/07 23:35:08 Creating in-cluster Sidecar client
	2025/12/07 23:35:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/07 23:35:08 Serving insecurely on HTTP port: 9090
	2025/12/07 23:35:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [48fc3f42e00b15030c847b6ceb34f41299df9ffdebfb2d4eff9f587834a6f337] <==
	I1207 23:34:49.442821       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1207 23:35:19.446106       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [4b439bad9ad85b6dcd7bc9ce303a25519ec7b97359492cd12f2b5f913bfe91d6] <==
	I1207 23:35:20.220715       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 23:35:20.237541       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 23:35:20.237622       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 23:35:37.633470       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 23:35:37.633538       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ac3ae20-044f-4c8f-a42d-d1ab1a68535f", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-320477_02b14718-4d0e-461e-8c9d-be5500cb1767 became leader
	I1207 23:35:37.633698       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-320477_02b14718-4d0e-461e-8c9d-be5500cb1767!
	I1207 23:35:37.733990       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-320477_02b14718-4d0e-461e-8c9d-be5500cb1767!
	

                                                
                                                
-- /stdout --
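
The first storage-provisioner container in the dump above exits because its probe of the in-cluster apiserver Service (https://10.96.0.1:443/version) times out, and the replacement container only proceeds once it can reach the apiserver and acquire the k8s.io-minikube-hostpath lease. The following is a minimal Go sketch of that kind of version probe, assuming it runs inside the cluster; the service IP and 32s timeout are taken from the logged error, while skipping TLS verification is purely illustrative and is not how minikube's storage-provisioner is actually written.

	// versionprobe probes the in-cluster apiserver Service the way the
	// storage-provisioner log above does, failing fast on an i/o timeout.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // matches the ?timeout=32s seen in the log
			Transport: &http.Transport{
				// Illustrative shortcut only: a real in-cluster client would trust
				// the cluster CA from its service account instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
		if err != nil {
			log.Fatalf("error getting server version: %v", err) // e.g. dial tcp ... i/o timeout
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatalf("reading response: %v", err)
		}
		fmt.Printf("apiserver version payload: %s\n", body)
	}
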
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-320477 -n old-k8s-version-320477
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-320477 -n old-k8s-version-320477: exit status 2 (367.486313ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
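
The status command renders its output through the Go template passed to --format and also reflects overall cluster health in its exit code, which is why a non-zero exit can still print "Running" for the queried field and why the harness marks exit status 2 as possibly benign. Below is a small Go sketch of running the same single-field check and separating the printed value from the exit code; the binary path and profile name are copied from the command above, nothing else is assumed.

	// statuscheck runs the same single-field status query the post-mortem uses
	// and reports the exit code separately from the printed value.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "old-k8s-version-320477",
			"-n", "old-k8s-version-320477")

		out, err := cmd.Output() // stdout is captured even when the exit code is non-zero
		field := strings.TrimSpace(string(out))

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("APIServer=%s (exit 0)\n", field)
		case errors.As(err, &exitErr):
			// Exit status 2 can still accompany a "Running" field, as seen above.
			fmt.Printf("APIServer=%s (exit %d, may be ok)\n", field, exitErr.ExitCode())
		default:
			log.Fatalf("could not run minikube status: %v", err)
		}
	}
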
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-320477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
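
That final query narrows the pod list to anything not in the Running phase by combining a field selector with a JSONPath template. A sketch of driving the same kubectl invocation from Go and splitting the space-separated names it prints; the context name comes from the command above, and a kubectl binary on PATH is assumed.

	// notrunning mirrors the harness call above: list pods in every namespace
	// whose status.phase is anything other than Running.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "old-k8s-version-320477",
			"get", "po", "-A",
			"--field-selector=status.phase!=Running",
			"-o=jsonpath={.items[*].metadata.name}",
		).Output()
		if err != nil {
			log.Fatalf("kubectl failed: %v", err)
		}

		names := strings.Fields(string(out))
		if len(names) == 0 {
			fmt.Println("all pods are Running")
			return
		}
		fmt.Printf("%d pod(s) not Running: %s\n", len(names), strings.Join(names, ", "))
	}
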
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
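
The network-settings snapshot reads the three proxy variables from the host environment and substitutes "<empty>" when one is unset. A tiny standalone Go sketch that produces the same kind of snapshot; this is a hypothetical helper for illustration, not the harness's own code.

	// proxysnapshot prints the proxy environment the way the post-mortem
	// header does, showing "<empty>" for variables that are unset or blank.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, name := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			val := os.Getenv(name)
			if val == "" {
				val = "<empty>"
			}
			fmt.Printf("%s=%q\n", name, val)
		}
	}
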
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-320477
helpers_test.go:243: (dbg) docker inspect old-k8s-version-320477:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60",
	        "Created": "2025-12-07T23:33:24.406627697Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 648013,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:34:37.904362181Z",
	            "FinishedAt": "2025-12-07T23:34:36.902588342Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60/hostname",
	        "HostsPath": "/var/lib/docker/containers/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60/hosts",
	        "LogPath": "/var/lib/docker/containers/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60/06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60-json.log",
	        "Name": "/old-k8s-version-320477",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-320477:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-320477",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "06913e870114853a6134a49eb080ad75cbade550da3920f3ac120370ad522f60",
	                "LowerDir": "/var/lib/docker/overlay2/acd9d1d66636fbbdfd34477ab909bc56ba8678951aa24f32a68daf160b304ed3-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/acd9d1d66636fbbdfd34477ab909bc56ba8678951aa24f32a68daf160b304ed3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/acd9d1d66636fbbdfd34477ab909bc56ba8678951aa24f32a68daf160b304ed3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/acd9d1d66636fbbdfd34477ab909bc56ba8678951aa24f32a68daf160b304ed3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-320477",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-320477/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-320477",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-320477",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-320477",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9becde4ef7b99a441a965bc7e1f782c121ec76992b206c54733d22ae271b06e3",
	            "SandboxKey": "/var/run/docker/netns/9becde4ef7b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-320477": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "79f54ad63e607736183a174ecfbd71671c6240b2d3072bbde0376d130c69013c",
	                    "EndpointID": "90fbe59cab7277486e368ac06742dccfdba4f352e228d2db974734f5d862382a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f2:d1:8f:66:58:4f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-320477",
	                        "06913e870114"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
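
The inspect output above shows every published container port (22, 2376, 5000, 8443, 32443) bound to a 127.0.0.1 host port in the 33438-33442 range, with 8443/tcp being the Kubernetes apiserver port minikube connects to. The following Go sketch pulls that one mapping out of `docker inspect` output with a minimal struct; only the fields visible above are modeled, and the container name is copied from the command shown earlier.

	// apiserverport reads `docker inspect <container>` output and prints the
	// host address bound to the container's 8443/tcp apiserver port.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// inspectEntry models only the slice of the docker inspect JSON used here.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "old-k8s-version-320477").Output()
		if err != nil {
			log.Fatalf("docker inspect failed: %v", err)
		}

		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			log.Fatalf("decoding inspect output: %v", err)
		}
		if len(entries) == 0 {
			log.Fatal("no container named old-k8s-version-320477")
		}

		for _, binding := range entries[0].NetworkSettings.Ports["8443/tcp"] {
			// For the container above this prints 127.0.0.1:33441.
			fmt.Printf("apiserver published at %s:%s\n", binding.HostIp, binding.HostPort)
		}
	}
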
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-320477 -n old-k8s-version-320477
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-320477 -n old-k8s-version-320477: exit status 2 (354.083501ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-320477 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-320477 logs -n 25: (1.287800781s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-600852 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo containerd config dump                                                                                                                                                                                                  │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ ssh     │ -p cilium-600852 sudo crio config                                                                                                                                                                                                             │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │                     │
	│ delete  │ -p cilium-600852                                                                                                                                                                                                                              │ cilium-600852          │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ start   │ -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p cert-expiration-612608 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-612608 │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ delete  │ -p cert-expiration-612608                                                                                                                                                                                                                     │ cert-expiration-612608 │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-320477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ stop    │ -p old-k8s-version-320477 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-320477 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p stopped-upgrade-604160                                                                                                                                                                                                                     │ stopped-upgrade-604160 │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-654118     │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-313006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ stop    │ -p no-preload-313006 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ addons  │ enable dashboard -p no-preload-313006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-313006      │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ image   │ old-k8s-version-320477 image list --format=json                                                                                                                                                                                               │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ pause   │ -p old-k8s-version-320477 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-320477 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:35:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:35:11.948416  656318 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:35:11.948543  656318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:35:11.948555  656318 out.go:374] Setting ErrFile to fd 2...
	I1207 23:35:11.948562  656318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:35:11.948862  656318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:35:11.949446  656318 out.go:368] Setting JSON to false
	I1207 23:35:11.951084  656318 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8256,"bootTime":1765142256,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:35:11.951163  656318 start.go:143] virtualization: kvm guest
	I1207 23:35:11.953338  656318 out.go:179] * [no-preload-313006] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:35:11.954572  656318 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:35:11.954581  656318 notify.go:221] Checking for updates...
	I1207 23:35:11.956967  656318 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:35:11.958450  656318 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:35:11.959838  656318 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:35:11.961173  656318 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:35:11.962510  656318 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:35:11.964222  656318 config.go:182] Loaded profile config "no-preload-313006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:35:11.965018  656318 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:35:11.990062  656318 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:35:11.990190  656318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:35:12.053881  656318 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-07 23:35:12.043233543 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:35:12.054041  656318 docker.go:319] overlay module found
	I1207 23:35:12.058529  656318 out.go:179] * Using the docker driver based on existing profile
	I1207 23:35:12.060005  656318 start.go:309] selected driver: docker
	I1207 23:35:12.060027  656318 start.go:927] validating driver "docker" against &{Name:no-preload-313006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:35:12.060153  656318 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:35:12.060829  656318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:35:12.120195  656318 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-07 23:35:12.110157918 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:35:12.120546  656318 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:35:12.120583  656318 cni.go:84] Creating CNI manager for ""
	I1207 23:35:12.120656  656318 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:35:12.120720  656318 start.go:353] cluster config:
	{Name:no-preload-313006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disable
Metrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:35:12.123832  656318 out.go:179] * Starting "no-preload-313006" primary control-plane node in "no-preload-313006" cluster
	I1207 23:35:12.125168  656318 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:35:12.126482  656318 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:35:12.128060  656318 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:35:12.128163  656318 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/config.json ...
	I1207 23:35:12.128184  656318 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:35:12.128409  656318 cache.go:107] acquiring lock: {Name:mk35f35d02b51e73648018346caa8577bcb02423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128478  656318 cache.go:107] acquiring lock: {Name:mk6e7f82161fd3b4764748eae2defc53fa3a2d89 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128505  656318 cache.go:107] acquiring lock: {Name:mkc02ccbaf1950fb11a48894c61699039caba7ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128557  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 23:35:12.128419  656318 cache.go:107] acquiring lock: {Name:mk9827fb3e41345bba396b2d0abebc9c76ae1b5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128572  656318 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 178.599µs
	I1207 23:35:12.128593  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1207 23:35:12.128599  656318 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 23:35:12.128556  656318 cache.go:107] acquiring lock: {Name:mk073566b0fe2be152587ae35afb0e7b5e91cd92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128607  656318 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 147.567µs
	I1207 23:35:12.128625  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1207 23:35:12.128612  656318 cache.go:107] acquiring lock: {Name:mke7b5e65769096d2da605e337724f9c23cd0a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128625  656318 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1207 23:35:12.128594  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1207 23:35:12.128634  656318 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 227.627µs
	I1207 23:35:12.128645  656318 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1207 23:35:12.128618  656318 cache.go:107] acquiring lock: {Name:mkbd6b49f7665e4f1e59327a6638af64accfbd8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.128647  656318 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 151.513µs
	I1207 23:35:12.128656  656318 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1207 23:35:12.128674  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1207 23:35:12.128675  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1207 23:35:12.128685  656318 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 70.765µs
	I1207 23:35:12.128689  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1207 23:35:12.128683  656318 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 74.871µs
	I1207 23:35:12.128695  656318 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1207 23:35:12.128698  656318 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1207 23:35:12.128698  656318 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 198.957µs
	I1207 23:35:12.128706  656318 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1207 23:35:12.128749  656318 cache.go:107] acquiring lock: {Name:mk187eff8ce17bd71a4f3c7c012208c9c4122014 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.129000  656318 cache.go:115] /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1207 23:35:12.129023  656318 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 322.927µs
	I1207 23:35:12.129035  656318 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1207 23:35:12.129044  656318 cache.go:87] Successfully saved all images to host disk.
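The cache lines above all resolve to hits: each image tarball already exists under the profile's .minikube/cache/images tree, so the save step is skipped within a few hundred microseconds. A minimal Go sketch of that exists-then-skip check (the helper name and exact layout are illustrative, not minikube's internal cache API):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"time"
    )

    // saveToTarIfMissing records a cache hit when the image tarball already
    // exists under the cache dir, and otherwise would fall through to a save.
    func saveToTarIfMissing(cacheDir, image string) error {
    	start := time.Now()
    	dst := filepath.Join(cacheDir, "images", "amd64", image)
    	if _, err := os.Stat(dst); err == nil {
    		fmt.Printf("cache image %q -> %q took %s (already exists)\n", image, dst, time.Since(start))
    		return nil
    	}
    	// cache miss: pulling the image and writing the tarball is omitted here
    	return fmt.Errorf("not cached yet: %s", dst)
    }

    func main() {
    	_ = saveToTarIfMissing(os.ExpandEnv("$HOME/.minikube/cache"), "registry.k8s.io/pause_3.10.1")
    }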
	I1207 23:35:12.153514  656318 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:35:12.153537  656318 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:35:12.153559  656318 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:35:12.153597  656318 start.go:360] acquireMachinesLock for no-preload-313006: {Name:mk5eb3348861def558ca942a9632e734d86e74b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:12.153666  656318 start.go:364] duration metric: took 48.816µs to acquireMachinesLock for "no-preload-313006"
	I1207 23:35:12.153689  656318 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:35:12.153698  656318 fix.go:54] fixHost starting: 
	I1207 23:35:12.153990  656318 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:35:12.176776  656318 fix.go:112] recreateIfNeeded on no-preload-313006: state=Stopped err=<nil>
	W1207 23:35:12.176815  656318 fix.go:138] unexpected machine state, will restart: <nil>
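fixHost finds the existing machine in a Stopped state and, rather than recreating it, falls through to a plain restart of the container (picked up again at the "Restarting existing docker container" line below). A rough sketch of that probe-and-restart decision, assuming the same docker CLI calls the log shows (the helper name is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // ensureRunning restarts the existing container when it is not running,
    // mirroring the inspect/start pair in the log above.
    func ensureRunning(name string) error {
    	out, err := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return fmt.Errorf("inspect %s: %w", name, err)
    	}
    	if status := strings.TrimSpace(string(out)); status != "running" {
    		// unexpected machine state (e.g. "exited"): restart instead of recreating
    		return exec.Command("docker", "start", name).Run()
    	}
    	return nil
    }

    func main() { fmt.Println(ensureRunning("no-preload-313006")) }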
	W1207 23:35:07.779077  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	W1207 23:35:10.277391  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	W1207 23:35:12.278306  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	I1207 23:35:08.536012  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:08.536492  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:08.536550  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:08.536603  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:08.562895  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:08.562919  610371 cri.go:89] found id: ""
	I1207 23:35:08.562931  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:08.562983  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:08.567203  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:08.567279  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:08.595795  610371 cri.go:89] found id: ""
	I1207 23:35:08.595824  610371 logs.go:282] 0 containers: []
	W1207 23:35:08.595835  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:08.595843  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:08.595907  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:08.622786  610371 cri.go:89] found id: ""
	I1207 23:35:08.622815  610371 logs.go:282] 0 containers: []
	W1207 23:35:08.622827  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:08.622836  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:08.622892  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:08.652163  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:08.652186  610371 cri.go:89] found id: ""
	I1207 23:35:08.652194  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:08.652257  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:08.656318  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:08.656413  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:08.683507  610371 cri.go:89] found id: ""
	I1207 23:35:08.683535  610371 logs.go:282] 0 containers: []
	W1207 23:35:08.683546  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:08.683553  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:08.683622  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:08.711226  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:08.711248  610371 cri.go:89] found id: ""
	I1207 23:35:08.711258  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:08.711322  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:08.715234  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:08.715291  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:08.741725  610371 cri.go:89] found id: ""
	I1207 23:35:08.741749  610371 logs.go:282] 0 containers: []
	W1207 23:35:08.741757  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:08.741763  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:08.741819  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:08.769008  610371 cri.go:89] found id: ""
	I1207 23:35:08.769038  610371 logs.go:282] 0 containers: []
	W1207 23:35:08.769049  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:08.769062  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:08.769080  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:08.800220  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:08.800254  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:08.891250  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:08.891294  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:08.924849  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:08.924883  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:08.980767  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:08.980807  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:08.980824  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:09.010590  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:09.010620  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:09.037911  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:09.037940  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:09.064244  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:09.064271  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:11.618410  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:11.618783  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:11.618838  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:11.618885  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:11.649406  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:11.649432  610371 cri.go:89] found id: ""
	I1207 23:35:11.649443  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:11.649503  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:11.653924  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:11.653989  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:11.682619  610371 cri.go:89] found id: ""
	I1207 23:35:11.682649  610371 logs.go:282] 0 containers: []
	W1207 23:35:11.682661  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:11.682670  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:11.682723  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:11.713785  610371 cri.go:89] found id: ""
	I1207 23:35:11.713809  610371 logs.go:282] 0 containers: []
	W1207 23:35:11.713817  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:11.713825  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:11.713885  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:11.743249  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:11.743272  610371 cri.go:89] found id: ""
	I1207 23:35:11.743283  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:11.743345  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:11.747570  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:11.747629  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:11.775069  610371 cri.go:89] found id: ""
	I1207 23:35:11.775097  610371 logs.go:282] 0 containers: []
	W1207 23:35:11.775106  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:11.775115  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:11.775176  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:11.806376  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:11.806395  610371 cri.go:89] found id: ""
	I1207 23:35:11.806404  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:11.806462  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:11.810858  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:11.810937  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:11.840493  610371 cri.go:89] found id: ""
	I1207 23:35:11.840517  610371 logs.go:282] 0 containers: []
	W1207 23:35:11.840526  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:11.840531  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:11.840592  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:11.870124  610371 cri.go:89] found id: ""
	I1207 23:35:11.870152  610371 logs.go:282] 0 containers: []
	W1207 23:35:11.870165  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:11.870174  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:11.870186  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:11.970358  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:11.970392  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:12.005052  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:12.005085  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:12.074835  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:12.074860  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:12.074878  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:12.113612  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:12.113649  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:12.145273  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:12.145305  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:12.180088  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:12.180128  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:12.236007  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:12.236047  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
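Between log-gathering rounds, this profile keeps probing the apiserver's /healthz endpoint and treats the "connection refused" dials as "not up yet". Sketched below as a plain HTTPS GET that skips certificate verification, since the endpoint serves a self-signed cert (URL taken from the log; the helper is illustrative):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // apiserverHealthy reports whether GET /healthz returns 200; any dial or
    // TLS error (e.g. "connect: connection refused") counts as unhealthy.
    func apiserverHealthy(url string) bool {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK
    }

    func main() { fmt.Println(apiserverHealthy("https://192.168.76.2:8443/healthz")) }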
	W1207 23:35:11.768724  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:14.267844  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	I1207 23:35:12.178474  656318 out.go:252] * Restarting existing docker container for "no-preload-313006" ...
	I1207 23:35:12.178568  656318 cli_runner.go:164] Run: docker start no-preload-313006
	I1207 23:35:12.438308  656318 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:35:12.457155  656318 kic.go:430] container "no-preload-313006" state is running.
	I1207 23:35:12.457571  656318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-313006
	I1207 23:35:12.476733  656318 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/config.json ...
	I1207 23:35:12.476989  656318 machine.go:94] provisionDockerMachine start ...
	I1207 23:35:12.477103  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:12.496259  656318 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:12.496522  656318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1207 23:35:12.496538  656318 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:35:12.497091  656318 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48192->127.0.0.1:33448: read: connection reset by peer
	I1207 23:35:15.629483  656318 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-313006
	
	I1207 23:35:15.629515  656318 ubuntu.go:182] provisioning hostname "no-preload-313006"
	I1207 23:35:15.629577  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:15.648744  656318 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:15.649071  656318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1207 23:35:15.649100  656318 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-313006 && echo "no-preload-313006" | sudo tee /etc/hostname
	I1207 23:35:15.788999  656318 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-313006
	
	I1207 23:35:15.789079  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:15.808467  656318 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:15.808737  656318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1207 23:35:15.808767  656318 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-313006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-313006/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-313006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:35:15.938166  656318 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:35:15.938209  656318 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:35:15.938239  656318 ubuntu.go:190] setting up certificates
	I1207 23:35:15.938256  656318 provision.go:84] configureAuth start
	I1207 23:35:15.938341  656318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-313006
	I1207 23:35:15.956774  656318 provision.go:143] copyHostCerts
	I1207 23:35:15.956833  656318 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:35:15.956841  656318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:35:15.956910  656318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:35:15.956998  656318 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:35:15.957006  656318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:35:15.957032  656318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:35:15.957082  656318 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:35:15.957089  656318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:35:15.957111  656318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:35:15.957165  656318 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.no-preload-313006 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-313006]
	I1207 23:35:16.153011  656318 provision.go:177] copyRemoteCerts
	I1207 23:35:16.153084  656318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:35:16.153146  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:16.172313  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:16.265958  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1207 23:35:16.284340  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:35:16.302279  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:35:16.320037  656318 provision.go:87] duration metric: took 381.764174ms to configureAuth
	I1207 23:35:16.320062  656318 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:35:16.320237  656318 config.go:182] Loaded profile config "no-preload-313006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:35:16.320386  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:16.339139  656318 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:16.339392  656318 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1207 23:35:16.339417  656318 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:35:16.651730  656318 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:35:16.651762  656318 machine.go:97] duration metric: took 4.174751851s to provisionDockerMachine
	I1207 23:35:16.651777  656318 start.go:293] postStartSetup for "no-preload-313006" (driver="docker")
	I1207 23:35:16.651805  656318 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:35:16.651874  656318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:35:16.651928  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:16.672055  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:16.767166  656318 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:35:16.770993  656318 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:35:16.771023  656318 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:35:16.771036  656318 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:35:16.771105  656318 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:35:16.771209  656318 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:35:16.771336  656318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:35:16.779720  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:35:16.797612  656318 start.go:296] duration metric: took 145.818898ms for postStartSetup
	I1207 23:35:16.797700  656318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:35:16.797760  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:16.816136  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:16.907681  656318 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:35:16.912551  656318 fix.go:56] duration metric: took 4.758844234s for fixHost
	I1207 23:35:16.912579  656318 start.go:83] releasing machines lock for "no-preload-313006", held for 4.758900576s
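The acquireMachinesLock / "releasing machines lock" pair above is a named lock with timing wrapped around it, which is where the two duration metrics come from. A simplified sketch of that pattern (minikube's real lock is a keyed, cross-process lock; a bare sync.Mutex stands in here):

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    var machinesLock sync.Mutex

    // withMachinesLock reports how long acquisition took and how long the
    // lock was held, echoing the two duration metrics in the log.
    func withMachinesLock(name string, fn func()) {
    	start := time.Now()
    	machinesLock.Lock()
    	fmt.Printf("took %s to acquireMachinesLock for %q\n", time.Since(start), name)
    	defer func() {
    		machinesLock.Unlock()
    		fmt.Printf("releasing machines lock for %q, held for %s\n", name, time.Since(start))
    	}()
    	fn()
    }

    func main() { withMachinesLock("no-preload-313006", func() { time.Sleep(10 * time.Millisecond) }) }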
	I1207 23:35:16.912658  656318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-313006
	I1207 23:35:16.931785  656318 ssh_runner.go:195] Run: cat /version.json
	I1207 23:35:16.931808  656318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:35:16.931834  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:16.931868  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	W1207 23:35:14.777930  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	W1207 23:35:16.778148  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
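The recurring pod_ready warnings for coredns-5dd5756b68-vv8vq come from a poll loop that re-reads the pod's Ready condition every couple of seconds until it flips to True or the wait times out. A small sketch of such a loop via kubectl's jsonpath output (the helper, namespace, and timeout are illustrative, not the test harness's own code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitPodReady polls the pod's Ready condition until it is True or the
    // timeout elapses.
    func waitPodReady(ns, pod string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, _ := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
    			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    		if strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		time.Sleep(2 * time.Second) // roughly the retry cadence visible in the timestamps above
    	}
    	return fmt.Errorf("pod %s/%s did not become Ready within %s", ns, pod, timeout)
    }

    func main() { fmt.Println(waitPodReady("kube-system", "coredns-5dd5756b68-vv8vq", 2*time.Minute)) }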
	I1207 23:35:14.770590  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:14.770979  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:14.771036  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:14.771099  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:14.799519  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:14.799546  610371 cri.go:89] found id: ""
	I1207 23:35:14.799554  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:14.799612  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:14.803831  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:14.803893  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:14.831634  610371 cri.go:89] found id: ""
	I1207 23:35:14.831659  610371 logs.go:282] 0 containers: []
	W1207 23:35:14.831668  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:14.831674  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:14.831724  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:14.859086  610371 cri.go:89] found id: ""
	I1207 23:35:14.859112  610371 logs.go:282] 0 containers: []
	W1207 23:35:14.859123  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:14.859131  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:14.859194  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:14.886672  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:14.886698  610371 cri.go:89] found id: ""
	I1207 23:35:14.886708  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:14.886778  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:14.890772  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:14.890838  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:14.918055  610371 cri.go:89] found id: ""
	I1207 23:35:14.918083  610371 logs.go:282] 0 containers: []
	W1207 23:35:14.918094  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:14.918103  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:14.918166  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:14.945022  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:14.945039  610371 cri.go:89] found id: ""
	I1207 23:35:14.945047  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:14.945105  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:14.949226  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:14.949288  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:14.977021  610371 cri.go:89] found id: ""
	I1207 23:35:14.977056  610371 logs.go:282] 0 containers: []
	W1207 23:35:14.977068  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:14.977077  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:14.977145  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:15.004617  610371 cri.go:89] found id: ""
	I1207 23:35:15.004645  610371 logs.go:282] 0 containers: []
	W1207 23:35:15.004659  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:15.004670  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:15.004683  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:15.035811  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:15.035845  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:15.063487  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:15.063518  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:15.090238  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:15.090271  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:15.142350  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:15.142384  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:15.173149  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:15.173177  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:15.258314  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:15.258368  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:15.292647  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:15.292682  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:15.350650  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:16.952030  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:16.952207  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:17.100538  656318 ssh_runner.go:195] Run: systemctl --version
	I1207 23:35:17.107283  656318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:35:17.142202  656318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:35:17.146927  656318 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:35:17.146987  656318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:35:17.155750  656318 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:35:17.155770  656318 start.go:496] detecting cgroup driver to use...
	I1207 23:35:17.155808  656318 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:35:17.155848  656318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:35:17.170400  656318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:35:17.182815  656318 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:35:17.182868  656318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:35:17.197759  656318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:35:17.210593  656318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:35:17.296103  656318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:35:17.380613  656318 docker.go:234] disabling docker service ...
	I1207 23:35:17.380687  656318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:35:17.395177  656318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:35:17.407843  656318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:35:17.494399  656318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:35:17.577708  656318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:35:17.590916  656318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:35:17.605817  656318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:35:17.605875  656318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.614997  656318 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:35:17.615071  656318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.624281  656318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.633698  656318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.643425  656318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:35:17.653185  656318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.663667  656318 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.672863  656318 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:17.683221  656318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:35:17.691500  656318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:35:17.699401  656318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:35:17.783727  656318 ssh_runner.go:195] Run: sudo systemctl restart crio
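The sequence just above rewrites two keys in /etc/crio/crio.conf.d/02-crio.conf with sed (pause_image and cgroup_manager), then reloads systemd and restarts crio. The same edit can be sketched in Go with line-anchored regexps; paths and values are copied from the log, and this is a sketch meant to run as root on the node, not minikube's own code:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}
    	// replace whole lines, matching the sed 's|^.*pause_image = .*$|...|' form above
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
    	if err := os.WriteFile(conf, data, 0o644); err != nil {
    		panic(err)
    	}
    	// a "systemctl daemon-reload" and "systemctl restart crio" follow, as in the log
    }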
	I1207 23:35:17.936763  656318 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:35:17.936836  656318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:35:17.942075  656318 start.go:564] Will wait 60s for crictl version
	I1207 23:35:17.942150  656318 ssh_runner.go:195] Run: which crictl
	I1207 23:35:17.946683  656318 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:35:17.975279  656318 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:35:17.975381  656318 ssh_runner.go:195] Run: crio --version
	I1207 23:35:18.006830  656318 ssh_runner.go:195] Run: crio --version
	I1207 23:35:18.040015  656318 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1207 23:35:18.041321  656318 cli_runner.go:164] Run: docker network inspect no-preload-313006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:35:18.061342  656318 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1207 23:35:18.066102  656318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:35:18.078024  656318 kubeadm.go:884] updating cluster {Name:no-preload-313006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:35:18.078159  656318 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:35:18.078214  656318 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:35:18.112713  656318 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:35:18.112734  656318 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:35:18.112742  656318 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1207 23:35:18.112881  656318 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-313006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:35:18.112966  656318 ssh_runner.go:195] Run: crio config
	I1207 23:35:18.164942  656318 cni.go:84] Creating CNI manager for ""
	I1207 23:35:18.164971  656318 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:35:18.164988  656318 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:35:18.165020  656318 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-313006 NodeName:no-preload-313006 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:35:18.165188  656318 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-313006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:35:18.165268  656318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1207 23:35:18.174644  656318 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:35:18.174720  656318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:35:18.183368  656318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1207 23:35:18.197285  656318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1207 23:35:18.211469  656318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
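At this point the generated kubeadm/kubelet/kube-proxy config shown earlier has been written to /var/tmp/minikube/kubeadm.yaml.new on the node. One way to sanity-check such a file is a kubeadm dry run; a hedged sketch follows, assuming kubeadm sits next to the other binaries under /var/lib/minikube/binaries (minikube itself does not perform this step):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --dry-run prints the objects kubeadm would create without touching the node
    	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm",
    		"init", "--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
    	out, err := cmd.CombinedOutput()
    	fmt.Println(string(out), err)
    }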
	I1207 23:35:18.226652  656318 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:35:18.230797  656318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:35:18.242628  656318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:35:18.327553  656318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:35:18.355034  656318 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006 for IP: 192.168.85.2
	I1207 23:35:18.355061  656318 certs.go:195] generating shared ca certs ...
	I1207 23:35:18.355087  656318 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:35:18.355231  656318 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:35:18.355270  656318 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:35:18.355280  656318 certs.go:257] generating profile certs ...
	I1207 23:35:18.355400  656318 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/client.key
	I1207 23:35:18.355469  656318 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.key.717a55f9
	I1207 23:35:18.355506  656318 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.key
	I1207 23:35:18.355630  656318 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:35:18.355672  656318 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:35:18.355686  656318 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:35:18.355716  656318 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:35:18.355753  656318 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:35:18.355787  656318 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:35:18.355833  656318 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:35:18.356409  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:35:18.377099  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:35:18.397963  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:35:18.420060  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:35:18.446621  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1207 23:35:18.468058  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:35:18.486707  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:35:18.505018  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:35:18.523682  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:35:18.542031  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:35:18.560957  656318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:35:18.580157  656318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:35:18.593339  656318 ssh_runner.go:195] Run: openssl version
	I1207 23:35:18.599350  656318 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:35:18.606639  656318 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:35:18.614063  656318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:35:18.617803  656318 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:35:18.617866  656318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:35:18.653512  656318 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:35:18.662289  656318 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:35:18.670374  656318 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:35:18.678482  656318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:35:18.682677  656318 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:35:18.682742  656318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:35:18.717952  656318 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:35:18.726286  656318 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:35:18.734160  656318 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:35:18.741914  656318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:35:18.745795  656318 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:35:18.745854  656318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:35:18.782639  656318 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
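	The openssl/ln sequence above installs each CA into the node's trust store using OpenSSL's subject-hash naming: the value printed by "openssl x509 -hash -noout" becomes the "/etc/ssl/certs/&lt;hash&gt;.0" symlink that TLS clients look up. A minimal Go sketch of that sequence follows; the helper name installCACert and its error handling are illustrative only (not minikube's own code), while the certificate path is the one copied above.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installCACert links a CA certificate into /etc/ssl/certs under the
	// OpenSSL subject-hash name so TLS clients on the node trust it.
	func installCACert(certPath string) error {
		// "openssl x509 -hash -noout -in <cert>" prints the subject hash, e.g. b5213941.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))

		// OpenSSL resolves CAs via /etc/ssl/certs/<hash>.0, hence the symlink name.
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		if err := exec.Command("sudo", "ln", "-fs", certPath, link).Run(); err != nil {
			return fmt.Errorf("linking %s -> %s: %w", link, certPath, err)
		}
		return nil
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}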
	I1207 23:35:18.791005  656318 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:35:18.795082  656318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:35:18.829997  656318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:35:18.871259  656318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:35:18.917443  656318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:35:18.968560  656318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:35:19.019600  656318 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
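	The "-checkend 86400" runs above confirm that each existing control-plane certificate stays valid for at least another 24 hours before it is reused. A hedged Go sketch of the same check; certValidFor24h is an illustrative name, and the two paths are examples taken from the commands above.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certValidFor24h reports whether the certificate remains valid for at
	// least another 86400 seconds; "openssl x509 -checkend" exits non-zero
	// when the certificate would expire within that window.
	func certValidFor24h(certPath string) bool {
		return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
	}

	func main() {
		for _, c := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			fmt.Printf("%s valid for 24h: %v\n", c, certValidFor24h(c))
		}
	}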
	I1207 23:35:19.060297  656318 kubeadm.go:401] StartCluster: {Name:no-preload-313006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-313006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:35:19.060459  656318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:35:19.060516  656318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:35:19.096921  656318 cri.go:89] found id: "7a318b0832368150c50b8e6bcc0b249c6c0f5e0835f526a9036a3f9d6818cc85"
	I1207 23:35:19.096947  656318 cri.go:89] found id: "404e1d5beb2da9d3cc45722c51fc2e1c7b0c587a72d76030ae16a0117eb8350a"
	I1207 23:35:19.096954  656318 cri.go:89] found id: "087d0f5345ac825bcf193ab138e126157b165b5aa86f1b652afd90640d7fda6e"
	I1207 23:35:19.096959  656318 cri.go:89] found id: "1902052b7fa9a51b713591332e8f8f19d13383667710cc98390abfe859d91e2c"
	I1207 23:35:19.096964  656318 cri.go:89] found id: ""
	I1207 23:35:19.097016  656318 ssh_runner.go:195] Run: sudo runc list -f json
	W1207 23:35:19.110261  656318 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:35:19Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:35:19.110457  656318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:35:19.118474  656318 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:35:19.118492  656318 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:35:19.118538  656318 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:35:19.126045  656318 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:35:19.126976  656318 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-313006" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:35:19.127658  656318 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-389542/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-313006" cluster setting kubeconfig missing "no-preload-313006" context setting]
	I1207 23:35:19.128563  656318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:35:19.130361  656318 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:35:19.138196  656318 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1207 23:35:19.138225  656318 kubeadm.go:602] duration metric: took 19.726131ms to restartPrimaryControlPlane
	I1207 23:35:19.138235  656318 kubeadm.go:403] duration metric: took 77.955614ms to StartCluster
	I1207 23:35:19.138251  656318 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:35:19.138320  656318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:35:19.140789  656318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:35:19.141076  656318 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:35:19.141139  656318 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:35:19.141265  656318 addons.go:70] Setting storage-provisioner=true in profile "no-preload-313006"
	I1207 23:35:19.141290  656318 addons.go:239] Setting addon storage-provisioner=true in "no-preload-313006"
	I1207 23:35:19.141288  656318 addons.go:70] Setting dashboard=true in profile "no-preload-313006"
	W1207 23:35:19.141304  656318 addons.go:248] addon storage-provisioner should already be in state true
	I1207 23:35:19.141312  656318 addons.go:239] Setting addon dashboard=true in "no-preload-313006"
	I1207 23:35:19.141310  656318 config.go:182] Loaded profile config "no-preload-313006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	W1207 23:35:19.141321  656318 addons.go:248] addon dashboard should already be in state true
	I1207 23:35:19.141364  656318 host.go:66] Checking if "no-preload-313006" exists ...
	I1207 23:35:19.141376  656318 addons.go:70] Setting default-storageclass=true in profile "no-preload-313006"
	I1207 23:35:19.141392  656318 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-313006"
	I1207 23:35:19.141364  656318 host.go:66] Checking if "no-preload-313006" exists ...
	I1207 23:35:19.141736  656318 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:35:19.141908  656318 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:35:19.142215  656318 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:35:19.144950  656318 out.go:179] * Verifying Kubernetes components...
	I1207 23:35:19.146370  656318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:35:19.168034  656318 addons.go:239] Setting addon default-storageclass=true in "no-preload-313006"
	W1207 23:35:19.168061  656318 addons.go:248] addon default-storageclass should already be in state true
	I1207 23:35:19.168089  656318 host.go:66] Checking if "no-preload-313006" exists ...
	I1207 23:35:19.168608  656318 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:35:19.171207  656318 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1207 23:35:19.171237  656318 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:35:19.172376  656318 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:35:19.172401  656318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:35:19.172466  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:19.173497  656318 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1207 23:35:16.268349  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:18.767379  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	I1207 23:35:19.174674  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1207 23:35:19.174694  656318 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1207 23:35:19.174770  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:19.193085  656318 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:35:19.193110  656318 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:35:19.193171  656318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:35:19.205950  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:19.207362  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:19.232071  656318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:35:19.299719  656318 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:35:19.315306  656318 node_ready.go:35] waiting up to 6m0s for node "no-preload-313006" to be "Ready" ...
	I1207 23:35:19.325691  656318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:35:19.325833  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1207 23:35:19.325863  656318 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1207 23:35:19.341669  656318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:35:19.343525  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1207 23:35:19.343552  656318 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1207 23:35:19.361500  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1207 23:35:19.361525  656318 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1207 23:35:19.378454  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1207 23:35:19.378479  656318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1207 23:35:19.396790  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1207 23:35:19.396818  656318 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1207 23:35:19.412274  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1207 23:35:19.412299  656318 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1207 23:35:19.427184  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1207 23:35:19.427208  656318 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1207 23:35:19.442505  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1207 23:35:19.442533  656318 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1207 23:35:19.459824  656318 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:35:19.459852  656318 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1207 23:35:19.476388  656318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:35:20.275752  656318 node_ready.go:49] node "no-preload-313006" is "Ready"
	I1207 23:35:20.275790  656318 node_ready.go:38] duration metric: took 960.419225ms for node "no-preload-313006" to be "Ready" ...
	I1207 23:35:20.275808  656318 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:35:20.275862  656318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:35:20.843041  656318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.517314986s)
	I1207 23:35:20.843106  656318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.501413543s)
	I1207 23:35:20.843277  656318 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.366851394s)
	I1207 23:35:20.843416  656318 api_server.go:72] duration metric: took 1.702306398s to wait for apiserver process to appear ...
	I1207 23:35:20.843443  656318 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:35:20.843467  656318 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1207 23:35:20.847022  656318 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-313006 addons enable metrics-server
	
	I1207 23:35:20.848990  656318 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:35:20.849018  656318 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:35:20.853374  656318 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1207 23:35:20.854578  656318 addons.go:530] duration metric: took 1.713446995s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1207 23:35:21.344271  656318 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1207 23:35:21.349572  656318 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:35:21.349610  656318 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:35:21.844301  656318 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1207 23:35:21.848684  656318 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1207 23:35:21.849641  656318 api_server.go:141] control plane version: v1.35.0-beta.0
	I1207 23:35:21.849665  656318 api_server.go:131] duration metric: took 1.006215022s to wait for apiserver health ...
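	The healthz polling above retries roughly every 500ms until the apiserver stops returning 500 ("rbac/bootstrap-roles" and the priority-class post-start hooks still pending) and answers 200/ok. A rough Go sketch of such a poll loop; waitForHealthz, the one-minute timeout, and the skipped TLS verification are assumptions for illustration, while the endpoint is the one checked above.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver health endpoint until it returns 200
	// or the timeout elapses. Certificate verification is skipped because the
	// apiserver presents a cluster-internal CA.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen above
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}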
	I1207 23:35:21.849676  656318 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:35:21.853103  656318 system_pods.go:59] 8 kube-system pods found
	I1207 23:35:21.853131  656318 system_pods.go:61] "coredns-7d764666f9-btjrp" [c81bd338-0a5e-4937-8442-bbacd5f685c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:35:21.853139  656318 system_pods.go:61] "etcd-no-preload-313006" [2124ac32-ed11-49d4-b522-e0bb8b268bb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:35:21.853145  656318 system_pods.go:61] "kindnet-nzf5r" [8d7ee556-9db1-49ce-a52b-403f54085f1f] Running
	I1207 23:35:21.853152  656318 system_pods.go:61] "kube-apiserver-no-preload-313006" [3c161ca5-34a9-4712-8eb3-6d444b18fae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:35:21.853158  656318 system_pods.go:61] "kube-controller-manager-no-preload-313006" [8b681c4d-7203-410e-a987-5f988f352aed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:35:21.853165  656318 system_pods.go:61] "kube-proxy-xw4pf" [ebc0bfad-9d66-4e97-ba23-878bf95416a6] Running
	I1207 23:35:21.853172  656318 system_pods.go:61] "kube-scheduler-no-preload-313006" [40d9aeaa-01fd-49cc-9e20-4339df06b915] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:35:21.853178  656318 system_pods.go:61] "storage-provisioner" [9c75fba7-bec3-421e-9f99-b51827afb29d] Running
	I1207 23:35:21.853185  656318 system_pods.go:74] duration metric: took 3.502188ms to wait for pod list to return data ...
	I1207 23:35:21.853194  656318 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:35:21.855301  656318 default_sa.go:45] found service account: "default"
	I1207 23:35:21.855321  656318 default_sa.go:55] duration metric: took 2.121154ms for default service account to be created ...
	I1207 23:35:21.855349  656318 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:35:21.857747  656318 system_pods.go:86] 8 kube-system pods found
	I1207 23:35:21.857774  656318 system_pods.go:89] "coredns-7d764666f9-btjrp" [c81bd338-0a5e-4937-8442-bbacd5f685c2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:35:21.857782  656318 system_pods.go:89] "etcd-no-preload-313006" [2124ac32-ed11-49d4-b522-e0bb8b268bb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:35:21.857787  656318 system_pods.go:89] "kindnet-nzf5r" [8d7ee556-9db1-49ce-a52b-403f54085f1f] Running
	I1207 23:35:21.857793  656318 system_pods.go:89] "kube-apiserver-no-preload-313006" [3c161ca5-34a9-4712-8eb3-6d444b18fae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:35:21.857801  656318 system_pods.go:89] "kube-controller-manager-no-preload-313006" [8b681c4d-7203-410e-a987-5f988f352aed] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:35:21.857805  656318 system_pods.go:89] "kube-proxy-xw4pf" [ebc0bfad-9d66-4e97-ba23-878bf95416a6] Running
	I1207 23:35:21.857820  656318 system_pods.go:89] "kube-scheduler-no-preload-313006" [40d9aeaa-01fd-49cc-9e20-4339df06b915] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:35:21.857827  656318 system_pods.go:89] "storage-provisioner" [9c75fba7-bec3-421e-9f99-b51827afb29d] Running
	I1207 23:35:21.857833  656318 system_pods.go:126] duration metric: took 2.478892ms to wait for k8s-apps to be running ...
	I1207 23:35:21.857843  656318 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:35:21.857886  656318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:35:21.871226  656318 system_svc.go:56] duration metric: took 13.375207ms WaitForService to wait for kubelet
	I1207 23:35:21.871251  656318 kubeadm.go:587] duration metric: took 2.730144893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:35:21.871273  656318 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:35:21.874022  656318 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:35:21.874047  656318 node_conditions.go:123] node cpu capacity is 8
	I1207 23:35:21.874066  656318 node_conditions.go:105] duration metric: took 2.787587ms to run NodePressure ...
	I1207 23:35:21.874082  656318 start.go:242] waiting for startup goroutines ...
	I1207 23:35:21.874091  656318 start.go:247] waiting for cluster config update ...
	I1207 23:35:21.874105  656318 start.go:256] writing updated cluster config ...
	I1207 23:35:21.874408  656318 ssh_runner.go:195] Run: rm -f paused
	I1207 23:35:21.878113  656318 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:35:21.881662  656318 pod_ready.go:83] waiting for pod "coredns-7d764666f9-btjrp" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:35:19.278553  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	W1207 23:35:21.286435  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	I1207 23:35:17.851662  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:17.852200  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:17.852262  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:17.852348  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:17.883096  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:17.883119  610371 cri.go:89] found id: ""
	I1207 23:35:17.883129  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:17.883192  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:17.887460  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:17.887546  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:17.916961  610371 cri.go:89] found id: ""
	I1207 23:35:17.916994  610371 logs.go:282] 0 containers: []
	W1207 23:35:17.917006  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:17.917014  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:17.917075  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:17.945282  610371 cri.go:89] found id: ""
	I1207 23:35:17.945307  610371 logs.go:282] 0 containers: []
	W1207 23:35:17.945317  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:17.945335  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:17.945398  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:17.975415  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:17.975435  610371 cri.go:89] found id: ""
	I1207 23:35:17.975446  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:17.975502  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:17.979886  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:17.979942  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:18.008898  610371 cri.go:89] found id: ""
	I1207 23:35:18.008922  610371 logs.go:282] 0 containers: []
	W1207 23:35:18.008932  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:18.008941  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:18.008998  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:18.037934  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:18.037963  610371 cri.go:89] found id: ""
	I1207 23:35:18.037975  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:18.038039  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:18.042097  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:18.042153  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:18.071090  610371 cri.go:89] found id: ""
	I1207 23:35:18.071116  610371 logs.go:282] 0 containers: []
	W1207 23:35:18.071128  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:18.071135  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:18.071203  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:18.100284  610371 cri.go:89] found id: ""
	I1207 23:35:18.100317  610371 logs.go:282] 0 containers: []
	W1207 23:35:18.100353  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:18.100366  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:18.100383  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:18.131797  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:18.131829  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:18.187969  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:18.187999  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:18.219984  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:18.220013  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:18.317010  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:18.317047  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:18.350183  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:18.350217  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:18.414137  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:18.414161  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:18.414177  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:18.458306  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:18.458356  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:20.987397  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:20.987852  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:20.987919  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:20.987991  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:21.015394  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:21.015416  610371 cri.go:89] found id: ""
	I1207 23:35:21.015424  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:21.015476  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:21.019925  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:21.020017  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:21.049407  610371 cri.go:89] found id: ""
	I1207 23:35:21.049437  610371 logs.go:282] 0 containers: []
	W1207 23:35:21.049449  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:21.049458  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:21.049516  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:21.080281  610371 cri.go:89] found id: ""
	I1207 23:35:21.080304  610371 logs.go:282] 0 containers: []
	W1207 23:35:21.080312  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:21.080319  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:21.080393  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:21.107885  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:21.107907  610371 cri.go:89] found id: ""
	I1207 23:35:21.107917  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:21.107981  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:21.111937  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:21.111992  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:21.138289  610371 cri.go:89] found id: ""
	I1207 23:35:21.138322  610371 logs.go:282] 0 containers: []
	W1207 23:35:21.138353  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:21.138363  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:21.138438  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:21.172080  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:21.172102  610371 cri.go:89] found id: ""
	I1207 23:35:21.172110  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:21.172161  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:21.176899  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:21.176960  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:21.206810  610371 cri.go:89] found id: ""
	I1207 23:35:21.206840  610371 logs.go:282] 0 containers: []
	W1207 23:35:21.206851  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:21.206861  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:21.206984  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:21.234708  610371 cri.go:89] found id: ""
	I1207 23:35:21.234738  610371 logs.go:282] 0 containers: []
	W1207 23:35:21.234750  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:21.234761  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:21.234774  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:21.271854  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:21.271887  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:21.393167  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:21.393215  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:21.433752  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:21.433790  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:21.502990  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:21.503011  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:21.503026  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:21.539520  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:21.539556  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:21.567587  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:21.567614  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:21.595533  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:21.595560  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1207 23:35:20.768645  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:23.268591  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:23.777530  647748 pod_ready.go:104] pod "coredns-5dd5756b68-vv8vq" is not "Ready", error: <nil>
	I1207 23:35:24.780184  647748 pod_ready.go:94] pod "coredns-5dd5756b68-vv8vq" is "Ready"
	I1207 23:35:24.780215  647748 pod_ready.go:86] duration metric: took 34.50798829s for pod "coredns-5dd5756b68-vv8vq" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:24.785416  647748 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:24.792173  647748 pod_ready.go:94] pod "etcd-old-k8s-version-320477" is "Ready"
	I1207 23:35:24.792206  647748 pod_ready.go:86] duration metric: took 6.754925ms for pod "etcd-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:24.795277  647748 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:24.800544  647748 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-320477" is "Ready"
	I1207 23:35:24.800574  647748 pod_ready.go:86] duration metric: took 5.271021ms for pod "kube-apiserver-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:24.803774  647748 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:24.975538  647748 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-320477" is "Ready"
	I1207 23:35:24.975568  647748 pod_ready.go:86] duration metric: took 171.769801ms for pod "kube-controller-manager-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:25.176874  647748 pod_ready.go:83] waiting for pod "kube-proxy-vlx4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:25.576538  647748 pod_ready.go:94] pod "kube-proxy-vlx4n" is "Ready"
	I1207 23:35:25.576571  647748 pod_ready.go:86] duration metric: took 399.665404ms for pod "kube-proxy-vlx4n" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:25.777348  647748 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:26.176424  647748 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-320477" is "Ready"
	I1207 23:35:26.176458  647748 pod_ready.go:86] duration metric: took 399.077633ms for pod "kube-scheduler-old-k8s-version-320477" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:26.176474  647748 pod_ready.go:40] duration metric: took 35.908019433s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:35:26.241491  647748 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1207 23:35:26.246676  647748 out.go:203] 
	W1207 23:35:26.248164  647748 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1207 23:35:26.249591  647748 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1207 23:35:26.250938  647748 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-320477" cluster and "default" namespace by default
	W1207 23:35:23.887383  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	W1207 23:35:25.887666  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	I1207 23:35:24.145199  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:24.145681  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:24.145739  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:24.145803  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:24.174832  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:24.174853  610371 cri.go:89] found id: ""
	I1207 23:35:24.174863  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:24.174926  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:24.178925  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:24.178994  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:24.207384  610371 cri.go:89] found id: ""
	I1207 23:35:24.207408  610371 logs.go:282] 0 containers: []
	W1207 23:35:24.207416  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:24.207422  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:24.207477  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:24.235647  610371 cri.go:89] found id: ""
	I1207 23:35:24.235672  610371 logs.go:282] 0 containers: []
	W1207 23:35:24.235683  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:24.235691  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:24.235751  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:24.261906  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:24.261935  610371 cri.go:89] found id: ""
	I1207 23:35:24.261945  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:24.262006  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:24.266007  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:24.266081  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:24.294095  610371 cri.go:89] found id: ""
	I1207 23:35:24.294119  610371 logs.go:282] 0 containers: []
	W1207 23:35:24.294127  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:24.294133  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:24.294185  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:24.323467  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:24.323493  610371 cri.go:89] found id: ""
	I1207 23:35:24.323504  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:24.323570  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:24.328418  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:24.328498  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:24.356897  610371 cri.go:89] found id: ""
	I1207 23:35:24.356925  610371 logs.go:282] 0 containers: []
	W1207 23:35:24.356933  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:24.356941  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:24.357008  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:24.383099  610371 cri.go:89] found id: ""
	I1207 23:35:24.383129  610371 logs.go:282] 0 containers: []
	W1207 23:35:24.383139  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:24.383151  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:24.383166  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:24.446213  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:24.446233  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:24.446246  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:24.483568  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:24.483599  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:24.513874  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:24.513902  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:24.542160  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:24.542188  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:24.596417  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:24.596462  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:24.629041  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:24.629071  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:24.730686  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:24.730734  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:27.278787  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:27.279224  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:27.279287  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:27.279379  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:27.313549  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:27.313579  610371 cri.go:89] found id: ""
	I1207 23:35:27.313590  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:27.313658  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:27.317990  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:27.318066  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:27.348758  610371 cri.go:89] found id: ""
	I1207 23:35:27.348790  610371 logs.go:282] 0 containers: []
	W1207 23:35:27.348801  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:27.348809  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:27.348862  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:27.378751  610371 cri.go:89] found id: ""
	I1207 23:35:27.378781  610371 logs.go:282] 0 containers: []
	W1207 23:35:27.378792  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:27.378800  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:27.378863  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:27.409470  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:27.409496  610371 cri.go:89] found id: ""
	I1207 23:35:27.409507  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:27.409573  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:27.413743  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:27.413803  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:27.438886  610371 cri.go:89] found id: ""
	I1207 23:35:27.438908  610371 logs.go:282] 0 containers: []
	W1207 23:35:27.438915  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:27.438922  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:27.438969  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:27.465832  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:27.465851  610371 cri.go:89] found id: ""
	I1207 23:35:27.465859  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:27.465907  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:27.470028  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:27.470087  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:27.498004  610371 cri.go:89] found id: ""
	I1207 23:35:27.498031  610371 logs.go:282] 0 containers: []
	W1207 23:35:27.498040  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:27.498046  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:27.498104  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:27.531695  610371 cri.go:89] found id: ""
	I1207 23:35:27.531726  610371 logs.go:282] 0 containers: []
	W1207 23:35:27.531738  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:27.531752  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:27.531770  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:27.567961  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:27.567996  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:27.597994  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:27.598027  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:27.624755  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:27.624783  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:27.673747  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:27.673788  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:27.705594  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:27.705622  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1207 23:35:25.771909  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:28.268503  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:27.888403  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	W1207 23:35:30.395579  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	I1207 23:35:27.796064  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:27.796102  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:27.828122  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:27.828157  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:27.900211  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:30.402302  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:30.402826  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:30.402884  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:30.402941  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:30.432100  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:30.432124  610371 cri.go:89] found id: ""
	I1207 23:35:30.432134  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:30.432199  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:30.436216  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:30.436285  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:30.463194  610371 cri.go:89] found id: ""
	I1207 23:35:30.463222  610371 logs.go:282] 0 containers: []
	W1207 23:35:30.463234  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:30.463242  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:30.463305  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:30.490300  610371 cri.go:89] found id: ""
	I1207 23:35:30.490345  610371 logs.go:282] 0 containers: []
	W1207 23:35:30.490366  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:30.490373  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:30.490471  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:30.519350  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:30.519375  610371 cri.go:89] found id: ""
	I1207 23:35:30.519386  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:30.519448  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:30.524212  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:30.524281  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:30.556293  610371 cri.go:89] found id: ""
	I1207 23:35:30.556341  610371 logs.go:282] 0 containers: []
	W1207 23:35:30.556353  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:30.556361  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:30.556420  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:30.585462  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:30.585485  610371 cri.go:89] found id: ""
	I1207 23:35:30.585495  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:30.585560  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:30.589797  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:30.589875  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:30.617489  610371 cri.go:89] found id: ""
	I1207 23:35:30.617519  610371 logs.go:282] 0 containers: []
	W1207 23:35:30.617527  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:30.617534  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:30.617590  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:30.646366  610371 cri.go:89] found id: ""
	I1207 23:35:30.646397  610371 logs.go:282] 0 containers: []
	W1207 23:35:30.646409  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:30.646420  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:30.646439  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:30.680062  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:30.680097  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:30.707582  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:30.707620  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:30.737601  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:30.737631  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:30.788229  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:30.788262  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:30.819038  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:30.819064  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:30.905293  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:30.905341  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:30.938667  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:30.938699  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:30.995828  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1207 23:35:30.767929  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:33.268083  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:35.268148  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:32.886623  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	W1207 23:35:34.887514  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	W1207 23:35:36.888219  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	I1207 23:35:33.496490  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:33.496969  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:33.497025  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:33.497077  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:33.526638  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:33.526662  610371 cri.go:89] found id: ""
	I1207 23:35:33.526671  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:33.526724  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:33.530825  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:33.530886  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:33.558539  610371 cri.go:89] found id: ""
	I1207 23:35:33.558571  610371 logs.go:282] 0 containers: []
	W1207 23:35:33.558582  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:33.558590  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:33.558662  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:33.588286  610371 cri.go:89] found id: ""
	I1207 23:35:33.588313  610371 logs.go:282] 0 containers: []
	W1207 23:35:33.588340  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:33.588350  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:33.588418  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:33.617392  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:33.617413  610371 cri.go:89] found id: ""
	I1207 23:35:33.617422  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:33.617497  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:33.621633  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:33.621701  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:33.650021  610371 cri.go:89] found id: ""
	I1207 23:35:33.650052  610371 logs.go:282] 0 containers: []
	W1207 23:35:33.650063  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:33.650072  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:33.650130  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:33.679493  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:33.679515  610371 cri.go:89] found id: ""
	I1207 23:35:33.679528  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:33.679578  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:33.684158  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:33.684242  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:33.713020  610371 cri.go:89] found id: ""
	I1207 23:35:33.713054  610371 logs.go:282] 0 containers: []
	W1207 23:35:33.713065  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:33.713072  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:33.713133  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:33.741503  610371 cri.go:89] found id: ""
	I1207 23:35:33.741546  610371 logs.go:282] 0 containers: []
	W1207 23:35:33.741560  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:33.741572  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:33.741589  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:33.769103  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:33.769130  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:33.796567  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:33.796597  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:33.848201  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:33.848239  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:33.880229  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:33.880268  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:33.972822  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:33.972857  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1207 23:35:34.006071  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:34.006106  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:34.063824  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:34.063842  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:34.063856  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:36.597353  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:36.597745  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:36.597800  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1207 23:35:36.597854  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1207 23:35:36.624901  610371 cri.go:89] found id: "a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:36.624920  610371 cri.go:89] found id: ""
	I1207 23:35:36.624928  610371 logs.go:282] 1 containers: [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96]
	I1207 23:35:36.624984  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:36.629123  610371 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1207 23:35:36.629190  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1207 23:35:36.657786  610371 cri.go:89] found id: ""
	I1207 23:35:36.657811  610371 logs.go:282] 0 containers: []
	W1207 23:35:36.657819  610371 logs.go:284] No container was found matching "etcd"
	I1207 23:35:36.657826  610371 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1207 23:35:36.657889  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1207 23:35:36.687422  610371 cri.go:89] found id: ""
	I1207 23:35:36.687448  610371 logs.go:282] 0 containers: []
	W1207 23:35:36.687457  610371 logs.go:284] No container was found matching "coredns"
	I1207 23:35:36.687463  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1207 23:35:36.687535  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1207 23:35:36.715591  610371 cri.go:89] found id: "7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:36.715619  610371 cri.go:89] found id: ""
	I1207 23:35:36.715631  610371 logs.go:282] 1 containers: [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f]
	I1207 23:35:36.715697  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:36.720183  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1207 23:35:36.720259  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1207 23:35:36.747310  610371 cri.go:89] found id: ""
	I1207 23:35:36.747346  610371 logs.go:282] 0 containers: []
	W1207 23:35:36.747358  610371 logs.go:284] No container was found matching "kube-proxy"
	I1207 23:35:36.747366  610371 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1207 23:35:36.747419  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1207 23:35:36.775096  610371 cri.go:89] found id: "0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:36.775122  610371 cri.go:89] found id: ""
	I1207 23:35:36.775130  610371 logs.go:282] 1 containers: [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b]
	I1207 23:35:36.775179  610371 ssh_runner.go:195] Run: which crictl
	I1207 23:35:36.779113  610371 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1207 23:35:36.779201  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1207 23:35:36.806689  610371 cri.go:89] found id: ""
	I1207 23:35:36.806715  610371 logs.go:282] 0 containers: []
	W1207 23:35:36.806724  610371 logs.go:284] No container was found matching "kindnet"
	I1207 23:35:36.806732  610371 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1207 23:35:36.806794  610371 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1207 23:35:36.833714  610371 cri.go:89] found id: ""
	I1207 23:35:36.833743  610371 logs.go:282] 0 containers: []
	W1207 23:35:36.833755  610371 logs.go:284] No container was found matching "storage-provisioner"
	I1207 23:35:36.833768  610371 logs.go:123] Gathering logs for describe nodes ...
	I1207 23:35:36.833788  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1207 23:35:36.892869  610371 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1207 23:35:36.892889  610371 logs.go:123] Gathering logs for kube-apiserver [a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96] ...
	I1207 23:35:36.892904  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a2bef0bffd4b2a9a0ab1a7bb11a1e7a75869c1a1695637a3f65d3a44c0cabb96"
	I1207 23:35:36.929341  610371 logs.go:123] Gathering logs for kube-scheduler [7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f] ...
	I1207 23:35:36.929379  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7537a84c0aadf0ca7982a5cecb52a78b3f0df56967f8d1cfc94e275439b4a11f"
	I1207 23:35:36.958723  610371 logs.go:123] Gathering logs for kube-controller-manager [0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b] ...
	I1207 23:35:36.958755  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0071d79178392492d58a8bb7b444a4a9aca44d7ecc5427e8ad7857e6c791cf2b"
	I1207 23:35:36.987042  610371 logs.go:123] Gathering logs for CRI-O ...
	I1207 23:35:36.987069  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1207 23:35:37.036685  610371 logs.go:123] Gathering logs for container status ...
	I1207 23:35:37.036721  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1207 23:35:37.067894  610371 logs.go:123] Gathering logs for kubelet ...
	I1207 23:35:37.067928  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1207 23:35:37.153427  610371 logs.go:123] Gathering logs for dmesg ...
	I1207 23:35:37.153465  610371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1207 23:35:37.768505  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	W1207 23:35:40.269131  648820 node_ready.go:57] node "embed-certs-654118" has "Ready":"False" status (will retry)
	I1207 23:35:39.685726  610371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:35:39.686266  610371 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1207 23:35:39.686369  610371 kubeadm.go:602] duration metric: took 4m1.634419702s to restartPrimaryControlPlane
	W1207 23:35:39.686435  610371 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1207 23:35:39.686491  610371 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1207 23:35:40.281086  610371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:35:40.296250  610371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:35:40.306090  610371 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:35:40.306167  610371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:35:40.315128  610371 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:35:40.315150  610371 kubeadm.go:158] found existing configuration files:
	
	I1207 23:35:40.315203  610371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:35:40.324757  610371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:35:40.324824  610371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:35:40.333716  610371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:35:40.343236  610371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:35:40.343402  610371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:35:40.353044  610371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:35:40.361443  610371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:35:40.361512  610371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:35:40.370148  610371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:35:40.379620  610371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:35:40.379676  610371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 23:35:40.390202  610371 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:35:40.429571  610371 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1207 23:35:40.429637  610371 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:35:40.509163  610371 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:35:40.509296  610371 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:35:40.509397  610371 kubeadm.go:319] OS: Linux
	I1207 23:35:40.509462  610371 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:35:40.509544  610371 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:35:40.509619  610371 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:35:40.509689  610371 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:35:40.509789  610371 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:35:40.509859  610371 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:35:40.509939  610371 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:35:40.510020  610371 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:35:40.583318  610371 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:35:40.583494  610371 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:35:40.583648  610371 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:35:40.590554  610371 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 23:35:40.592431  610371 out.go:252]   - Generating certificates and keys ...
	I1207 23:35:40.592504  610371 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:35:40.592589  610371 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:35:40.592695  610371 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1207 23:35:40.592781  610371 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1207 23:35:40.592877  610371 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1207 23:35:40.592958  610371 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1207 23:35:40.593066  610371 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1207 23:35:40.593127  610371 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1207 23:35:40.593194  610371 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1207 23:35:40.593267  610371 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1207 23:35:40.593316  610371 kubeadm.go:319] [certs] Using the existing "sa" key
	I1207 23:35:40.593406  610371 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:35:40.723559  610371 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:35:40.857538  610371 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:35:40.940475  610371 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:35:41.071064  610371 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:35:41.184580  610371 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:35:41.185156  610371 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:35:41.187797  610371 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1207 23:35:39.387660  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	W1207 23:35:41.387949  656318 pod_ready.go:104] pod "coredns-7d764666f9-btjrp" is not "Ready", error: <nil>
	I1207 23:35:41.193007  610371 out.go:252]   - Booting up control plane ...
	I1207 23:35:41.193173  610371 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:35:41.193274  610371 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:35:41.193398  610371 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:35:41.207031  610371 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:35:41.207202  610371 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:35:41.215350  610371 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:35:41.215659  610371 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:35:41.215727  610371 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:35:41.326732  610371 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 23:35:41.326886  610371 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 23:35:41.827460  610371 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.825465ms
	I1207 23:35:41.831933  610371 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 23:35:41.832076  610371 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1207 23:35:41.832223  610371 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 23:35:41.832292  610371 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
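	
	The repeated "Checking apiserver healthz ..." probes recorded above (api_server.go:253/269) and the control-plane-check URLs kubeadm prints here amount to polling an HTTPS healthz/livez endpoint until it answers or a deadline expires. The following is a minimal, editor-added sketch of such a loop for readers reproducing the check by hand; the function name, the 3-second interval, and the 4-minute deadline are illustrative assumptions, not minikube's actual implementation.
	
	    // healthz_probe.go: stdlib-only sketch of an apiserver healthz retry loop.
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )
	
	    // waitForHealthz polls an HTTPS endpoint until it returns 200 OK or the
	    // deadline passes. Certificate verification is skipped, as a probe without
	    // the cluster CA would do; "connection refused" simply means the apiserver
	    // container is not (yet) listening, which matches the log lines above.
	    func waitForHealthz(url string, interval, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // control plane is answering
	                }
	            }
	            time.Sleep(interval)
	        }
	        return fmt.Errorf("timed out waiting for %s", url)
	    }
	
	    func main() {
	        // Endpoint taken from the log above; interval and deadline are assumed values.
	        if err := waitForHealthz("https://192.168.76.2:8443/healthz", 3*time.Second, 4*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
	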
	
	
	==> CRI-O <==
	Dec 07 23:35:08 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:08.106389663Z" level=info msg="Created container ce7324d8aac62ae7c0aa0221635e72e96bfcd16abd09a61ad8cef4c7e66ca07f: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p5lgr/kubernetes-dashboard" id=099c5ed2-69d4-4f69-8f98-53d05fa1b45e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:08 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:08.107049574Z" level=info msg="Starting container: ce7324d8aac62ae7c0aa0221635e72e96bfcd16abd09a61ad8cef4c7e66ca07f" id=b188801a-7f0c-43ec-8825-7ffd282d936b name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:35:08 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:08.108893396Z" level=info msg="Started container" PID=1715 containerID=ce7324d8aac62ae7c0aa0221635e72e96bfcd16abd09a61ad8cef4c7e66ca07f description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p5lgr/kubernetes-dashboard id=b188801a-7f0c-43ec-8825-7ffd282d936b name=/runtime.v1.RuntimeService/StartContainer sandboxID=673d09231e7616d4762786ffd70413008d5bca0a22552eca8c69832d3da4d9ae
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.152962497Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a2cbfe45-c956-468f-be19-9379f658b5c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.153960897Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=82a1259e-151a-4b02-a098-6630a01f2b58 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.154966516Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1fb9ab7c-b588-453e-9166-ee030bc482b0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.155107079Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.160146282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.160367356Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b78889e500dfba489acee2a4b2fec51114d9d5b72c5e3c7f3c4b1437713ba549/merged/etc/passwd: no such file or directory"
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.160408279Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b78889e500dfba489acee2a4b2fec51114d9d5b72c5e3c7f3c4b1437713ba549/merged/etc/group: no such file or directory"
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.160758969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.20507523Z" level=info msg="Created container 4b439bad9ad85b6dcd7bc9ce303a25519ec7b97359492cd12f2b5f913bfe91d6: kube-system/storage-provisioner/storage-provisioner" id=1fb9ab7c-b588-453e-9166-ee030bc482b0 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.205750132Z" level=info msg="Starting container: 4b439bad9ad85b6dcd7bc9ce303a25519ec7b97359492cd12f2b5f913bfe91d6" id=644bd256-9257-4470-a78b-dd7d56009617 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:35:20 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:20.207820773Z" level=info msg="Started container" PID=1738 containerID=4b439bad9ad85b6dcd7bc9ce303a25519ec7b97359492cd12f2b5f913bfe91d6 description=kube-system/storage-provisioner/storage-provisioner id=644bd256-9257-4470-a78b-dd7d56009617 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4aaa9811f6442560618bf8c3587c3de8b7e1d770f1e311131198cbd3a8fd9766
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.034234725Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5e5e0198-546a-4956-a03b-9e077fb30431 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.035830137Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bad4ac84-ca5b-4162-a1ac-c091b1c96ab6 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.037141075Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk/dashboard-metrics-scraper" id=72b0d225-c733-4974-918a-8dc9988a1121 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.037313199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.046680405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.047417035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.086166985Z" level=info msg="Created container 8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk/dashboard-metrics-scraper" id=72b0d225-c733-4974-918a-8dc9988a1121 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.086937566Z" level=info msg="Starting container: 8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75" id=7c361fcf-d4fc-4919-a7e3-fe91585df4af name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.090467033Z" level=info msg="Started container" PID=1753 containerID=8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk/dashboard-metrics-scraper id=7c361fcf-d4fc-4919-a7e3-fe91585df4af name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbe2e83f51aa768d059dc865706a5132983064fe63d5f1b171980434174cc148
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.168567623Z" level=info msg="Removing container: 0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be" id=a58439ef-6794-4281-af79-26c2689ec483 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:35:25 old-k8s-version-320477 crio[563]: time="2025-12-07T23:35:25.182701617Z" level=info msg="Removed container 0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk/dashboard-metrics-scraper" id=a58439ef-6794-4281-af79-26c2689ec483 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8b580c253981d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   fbe2e83f51aa7       dashboard-metrics-scraper-5f989dc9cf-ksnsk       kubernetes-dashboard
	4b439bad9ad85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   4aaa9811f6442       storage-provisioner                              kube-system
	ce7324d8aac62       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   673d09231e761       kubernetes-dashboard-8694d4445c-p5lgr            kubernetes-dashboard
	0292e466a3104       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   2f02c60fea14c       busybox                                          default
	e5802a25760f8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           54 seconds ago      Running             coredns                     0                   29eb706c8139b       coredns-5dd5756b68-vv8vq                         kube-system
	3a169be3b9431       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   ce28c70449e99       kindnet-gnv88                                    kube-system
	48fc3f42e00b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   4aaa9811f6442       storage-provisioner                              kube-system
	7ac02f5275ac1       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           54 seconds ago      Running             kube-proxy                  0                   ff02be16e7894       kube-proxy-vlx4n                                 kube-system
	935941a2cb637       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   4b91391978d24       kube-apiserver-old-k8s-version-320477            kube-system
	3699584e5acbb       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   772d5c5546d5f       kube-controller-manager-old-k8s-version-320477   kube-system
	a21fad74c0501       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   25083588cc9dc       kube-scheduler-old-k8s-version-320477            kube-system
	9a8b863541694       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   0d45412d81bb6       etcd-old-k8s-version-320477                      kube-system
	
	
	==> coredns [e5802a25760f8ce1babbff8e5ab0d37753e4c8f06edd2c4595f17533c8d75cb8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36868 - 38104 "HINFO IN 1738503828150855575.3130993462533399884. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021146681s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-320477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-320477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=old-k8s-version-320477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_33_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:33:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-320477
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:35:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:35:19 +0000   Sun, 07 Dec 2025 23:33:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:35:19 +0000   Sun, 07 Dec 2025 23:33:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:35:19 +0000   Sun, 07 Dec 2025 23:33:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:35:19 +0000   Sun, 07 Dec 2025 23:34:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-320477
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                94c12e17-34f4-4521-b4e4-c632ca1c3651
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-vv8vq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-320477                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-gnv88                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-320477             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-320477    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-vlx4n                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-320477             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-ksnsk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-p5lgr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-320477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s                 kubelet          Node old-k8s-version-320477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s                 kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m4s                 kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                 node-controller  Node old-k8s-version-320477 event: Registered Node old-k8s-version-320477 in Controller
	  Normal  NodeReady                97s                  kubelet          Node old-k8s-version-320477 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node old-k8s-version-320477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node old-k8s-version-320477 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                  node-controller  Node old-k8s-version-320477 event: Registered Node old-k8s-version-320477 in Controller
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [9a8b8635416941bed89621f1e677d2a500361f4b4b1de6dac578300985bf3afc] <==
	{"level":"info","ts":"2025-12-07T23:34:46.626748Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-07T23:34:46.626764Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-07T23:34:46.626658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-12-07T23:34:46.626905Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-12-07T23:34:46.627043Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-07T23:34:46.627077Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-07T23:34:46.629254Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-07T23:34:46.629403Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-07T23:34:46.629457Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-12-07T23:34:46.629587Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-07T23:34:46.629626Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-07T23:34:47.9187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-07T23:34:47.918742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-07T23:34:47.918756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-12-07T23:34:47.918767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-12-07T23:34:47.918772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-07T23:34:47.918789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-12-07T23:34:47.918803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-12-07T23:34:47.91983Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-320477 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-07T23:34:47.919875Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-07T23:34:47.919923Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-07T23:34:47.920125Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-07T23:34:47.920185Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-07T23:34:47.92202Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-12-07T23:34:47.922266Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:35:43 up  2:18,  0 user,  load average: 1.79, 2.07, 1.77
	Linux old-k8s-version-320477 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3a169be3b943116304e4ac0add496f779a883bd6c9970be5183cbf2572dd3b72] <==
	I1207 23:34:49.701207       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:34:49.701716       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1207 23:34:49.701978       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:34:49.702000       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:34:49.702037       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:34:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:34:49.993190       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:34:50.040525       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:34:50.040628       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:34:50.041129       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:34:50.441165       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:34:50.441193       1 metrics.go:72] Registering metrics
	I1207 23:34:50.441245       1 controller.go:711] "Syncing nftables rules"
	I1207 23:34:59.946430       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:34:59.946482       1 main.go:301] handling current node
	I1207 23:35:09.944958       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:35:09.944995       1 main.go:301] handling current node
	I1207 23:35:19.944836       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:35:19.944871       1 main.go:301] handling current node
	I1207 23:35:29.946770       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:35:29.946813       1 main.go:301] handling current node
	I1207 23:35:39.949524       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:35:39.949559       1 main.go:301] handling current node
	
	
	==> kube-apiserver [935941a2cb637af36928ffb8fe952a120096af31c3a4cf9940d0decdc9dd0ffb] <==
	I1207 23:34:49.033352       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1207 23:34:49.035003       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1207 23:34:49.035072       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1207 23:34:49.035685       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1207 23:34:49.041500       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1207 23:34:49.041548       1 shared_informer.go:318] Caches are synced for configmaps
	I1207 23:34:49.041530       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 23:34:49.042413       1 aggregator.go:166] initial CRD sync complete...
	I1207 23:34:49.042468       1 autoregister_controller.go:141] Starting autoregister controller
	I1207 23:34:49.042476       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:34:49.042485       1 cache.go:39] Caches are synced for autoregister controller
	E1207 23:34:49.060401       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:34:49.075890       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:34:49.936418       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 23:34:50.107682       1 controller.go:624] quota admission added evaluator for: namespaces
	I1207 23:34:50.140806       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1207 23:34:50.160476       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:34:50.168297       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:34:50.175539       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1207 23:34:50.212300       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.200.123"}
	I1207 23:34:50.226620       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.128.239"}
	I1207 23:35:01.729214       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:35:01.729264       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:35:01.791975       1 controller.go:624] quota admission added evaluator for: endpoints
	I1207 23:35:01.815923       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3699584e5acbb7ce5f69043c7f75a0d7f118a2286a1460827d4e7093b932ea8f] <==
	I1207 23:35:01.847272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.697µs"
	I1207 23:35:01.851798       1 shared_informer.go:318] Caches are synced for resource quota
	I1207 23:35:01.852157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.864µs"
	I1207 23:35:01.853700       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.124882ms"
	I1207 23:35:01.853785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.313µs"
	I1207 23:35:01.860034       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="39.012µs"
	I1207 23:35:01.869247       1 shared_informer.go:318] Caches are synced for stateful set
	I1207 23:35:01.895798       1 shared_informer.go:318] Caches are synced for disruption
	I1207 23:35:01.910319       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1207 23:35:01.940172       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1207 23:35:01.940185       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1207 23:35:01.941454       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1207 23:35:01.942552       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1207 23:35:02.263438       1 shared_informer.go:318] Caches are synced for garbage collector
	I1207 23:35:02.289806       1 shared_informer.go:318] Caches are synced for garbage collector
	I1207 23:35:02.289857       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1207 23:35:05.118433       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.5µs"
	I1207 23:35:06.123536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.651µs"
	I1207 23:35:07.128292       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.817µs"
	I1207 23:35:08.136932       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.667945ms"
	I1207 23:35:08.137153       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.754µs"
	I1207 23:35:24.328488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.269602ms"
	I1207 23:35:24.328606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.214µs"
	I1207 23:35:25.183126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.23µs"
	I1207 23:35:32.152319       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.659µs"
	
	
	==> kube-proxy [7ac02f5275ac14463e5fd58a2169b7fdf2d51dd9e8b7dc1f1fab2b5d1e42f235] <==
	I1207 23:34:49.483237       1 server_others.go:69] "Using iptables proxy"
	I1207 23:34:49.493194       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1207 23:34:49.511968       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:34:49.514868       1 server_others.go:152] "Using iptables Proxier"
	I1207 23:34:49.514910       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1207 23:34:49.514921       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1207 23:34:49.514962       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 23:34:49.515223       1 server.go:846] "Version info" version="v1.28.0"
	I1207 23:34:49.515282       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:34:49.516127       1 config.go:97] "Starting endpoint slice config controller"
	I1207 23:34:49.516658       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 23:34:49.516248       1 config.go:188] "Starting service config controller"
	I1207 23:34:49.516814       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 23:34:49.516586       1 config.go:315] "Starting node config controller"
	I1207 23:34:49.516864       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 23:34:49.617660       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 23:34:49.618451       1 shared_informer.go:318] Caches are synced for service config
	I1207 23:34:49.620063       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a21fad74c0501472726aa964a8eae6cf6097ab2ad2cc7f048b4b2e442c8ec636] <==
	I1207 23:34:47.292890       1 serving.go:348] Generated self-signed cert in-memory
	W1207 23:34:48.967539       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:34:48.967586       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:34:48.967603       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:34:48.967614       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:34:49.015453       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1207 23:34:49.015515       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:34:49.019249       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:34:49.019286       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 23:34:49.021316       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1207 23:34:49.021428       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1207 23:34:49.119490       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 07 23:35:01 old-k8s-version-320477 kubelet[727]: I1207 23:35:01.839733     727 topology_manager.go:215] "Topology Admit Handler" podUID="1c93ee9e-303c-45f3-85db-45aa00340c87" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-ksnsk"
	Dec 07 23:35:01 old-k8s-version-320477 kubelet[727]: I1207 23:35:01.956458     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1c93ee9e-303c-45f3-85db-45aa00340c87-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-ksnsk\" (UID: \"1c93ee9e-303c-45f3-85db-45aa00340c87\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk"
	Dec 07 23:35:01 old-k8s-version-320477 kubelet[727]: I1207 23:35:01.956505     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/990e3703-ccdc-419b-9739-4009d4eef45d-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-p5lgr\" (UID: \"990e3703-ccdc-419b-9739-4009d4eef45d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p5lgr"
	Dec 07 23:35:01 old-k8s-version-320477 kubelet[727]: I1207 23:35:01.956537     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5j4b\" (UniqueName: \"kubernetes.io/projected/990e3703-ccdc-419b-9739-4009d4eef45d-kube-api-access-h5j4b\") pod \"kubernetes-dashboard-8694d4445c-p5lgr\" (UID: \"990e3703-ccdc-419b-9739-4009d4eef45d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p5lgr"
	Dec 07 23:35:01 old-k8s-version-320477 kubelet[727]: I1207 23:35:01.956698     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjpp7\" (UniqueName: \"kubernetes.io/projected/1c93ee9e-303c-45f3-85db-45aa00340c87-kube-api-access-mjpp7\") pod \"dashboard-metrics-scraper-5f989dc9cf-ksnsk\" (UID: \"1c93ee9e-303c-45f3-85db-45aa00340c87\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk"
	Dec 07 23:35:05 old-k8s-version-320477 kubelet[727]: I1207 23:35:05.106056     727 scope.go:117] "RemoveContainer" containerID="a6a2217224e189b80aa48bf8f1fb1a2f648cc2077b29b228c6988af4b9496ec8"
	Dec 07 23:35:06 old-k8s-version-320477 kubelet[727]: I1207 23:35:06.110001     727 scope.go:117] "RemoveContainer" containerID="a6a2217224e189b80aa48bf8f1fb1a2f648cc2077b29b228c6988af4b9496ec8"
	Dec 07 23:35:06 old-k8s-version-320477 kubelet[727]: I1207 23:35:06.110184     727 scope.go:117] "RemoveContainer" containerID="0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be"
	Dec 07 23:35:06 old-k8s-version-320477 kubelet[727]: E1207 23:35:06.110586     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ksnsk_kubernetes-dashboard(1c93ee9e-303c-45f3-85db-45aa00340c87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk" podUID="1c93ee9e-303c-45f3-85db-45aa00340c87"
	Dec 07 23:35:07 old-k8s-version-320477 kubelet[727]: I1207 23:35:07.114708     727 scope.go:117] "RemoveContainer" containerID="0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be"
	Dec 07 23:35:07 old-k8s-version-320477 kubelet[727]: E1207 23:35:07.115061     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ksnsk_kubernetes-dashboard(1c93ee9e-303c-45f3-85db-45aa00340c87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk" podUID="1c93ee9e-303c-45f3-85db-45aa00340c87"
	Dec 07 23:35:08 old-k8s-version-320477 kubelet[727]: I1207 23:35:08.130608     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-p5lgr" podStartSLOduration=1.234558298 podCreationTimestamp="2025-12-07 23:35:01 +0000 UTC" firstStartedPulling="2025-12-07 23:35:02.167571226 +0000 UTC m=+16.238450781" lastFinishedPulling="2025-12-07 23:35:08.063552402 +0000 UTC m=+22.134431960" observedRunningTime="2025-12-07 23:35:08.129995523 +0000 UTC m=+22.200875083" watchObservedRunningTime="2025-12-07 23:35:08.130539477 +0000 UTC m=+22.201419037"
	Dec 07 23:35:12 old-k8s-version-320477 kubelet[727]: I1207 23:35:12.141668     727 scope.go:117] "RemoveContainer" containerID="0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be"
	Dec 07 23:35:12 old-k8s-version-320477 kubelet[727]: E1207 23:35:12.142086     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ksnsk_kubernetes-dashboard(1c93ee9e-303c-45f3-85db-45aa00340c87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk" podUID="1c93ee9e-303c-45f3-85db-45aa00340c87"
	Dec 07 23:35:20 old-k8s-version-320477 kubelet[727]: I1207 23:35:20.152159     727 scope.go:117] "RemoveContainer" containerID="48fc3f42e00b15030c847b6ceb34f41299df9ffdebfb2d4eff9f587834a6f337"
	Dec 07 23:35:25 old-k8s-version-320477 kubelet[727]: I1207 23:35:25.033585     727 scope.go:117] "RemoveContainer" containerID="0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be"
	Dec 07 23:35:25 old-k8s-version-320477 kubelet[727]: I1207 23:35:25.167147     727 scope.go:117] "RemoveContainer" containerID="0525b9e594e4b95cd54e7455a340083b94c2548aed57b0c0964ba689f8a815be"
	Dec 07 23:35:25 old-k8s-version-320477 kubelet[727]: I1207 23:35:25.167416     727 scope.go:117] "RemoveContainer" containerID="8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75"
	Dec 07 23:35:25 old-k8s-version-320477 kubelet[727]: E1207 23:35:25.167801     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ksnsk_kubernetes-dashboard(1c93ee9e-303c-45f3-85db-45aa00340c87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk" podUID="1c93ee9e-303c-45f3-85db-45aa00340c87"
	Dec 07 23:35:32 old-k8s-version-320477 kubelet[727]: I1207 23:35:32.142548     727 scope.go:117] "RemoveContainer" containerID="8b580c253981d8b8c79bb5abf64e0fc2d20cb1697c918a63e8051b60454e5e75"
	Dec 07 23:35:32 old-k8s-version-320477 kubelet[727]: E1207 23:35:32.142901     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-ksnsk_kubernetes-dashboard(1c93ee9e-303c-45f3-85db-45aa00340c87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ksnsk" podUID="1c93ee9e-303c-45f3-85db-45aa00340c87"
	Dec 07 23:35:38 old-k8s-version-320477 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 07 23:35:38 old-k8s-version-320477 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 07 23:35:38 old-k8s-version-320477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 07 23:35:38 old-k8s-version-320477 systemd[1]: kubelet.service: Consumed 1.538s CPU time.
	
	
	==> kubernetes-dashboard [ce7324d8aac62ae7c0aa0221635e72e96bfcd16abd09a61ad8cef4c7e66ca07f] <==
	2025/12/07 23:35:08 Using namespace: kubernetes-dashboard
	2025/12/07 23:35:08 Using in-cluster config to connect to apiserver
	2025/12/07 23:35:08 Using secret token for csrf signing
	2025/12/07 23:35:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/07 23:35:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/07 23:35:08 Successful initial request to the apiserver, version: v1.28.0
	2025/12/07 23:35:08 Generating JWE encryption key
	2025/12/07 23:35:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/07 23:35:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/07 23:35:08 Initializing JWE encryption key from synchronized object
	2025/12/07 23:35:08 Creating in-cluster Sidecar client
	2025/12/07 23:35:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/07 23:35:08 Serving insecurely on HTTP port: 9090
	2025/12/07 23:35:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/07 23:35:08 Starting overwatch
	
	
	==> storage-provisioner [48fc3f42e00b15030c847b6ceb34f41299df9ffdebfb2d4eff9f587834a6f337] <==
	I1207 23:34:49.442821       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1207 23:35:19.446106       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [4b439bad9ad85b6dcd7bc9ce303a25519ec7b97359492cd12f2b5f913bfe91d6] <==
	I1207 23:35:20.220715       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 23:35:20.237541       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 23:35:20.237622       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 23:35:37.633470       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 23:35:37.633538       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ac3ae20-044f-4c8f-a42d-d1ab1a68535f", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-320477_02b14718-4d0e-461e-8c9d-be5500cb1767 became leader
	I1207 23:35:37.633698       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-320477_02b14718-4d0e-461e-8c9d-be5500cb1767!
	I1207 23:35:37.733990       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-320477_02b14718-4d0e-461e-8c9d-be5500cb1767!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-320477 -n old-k8s-version-320477
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-320477 -n old-k8s-version-320477: exit status 2 (331.364657ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-320477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.67s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-654118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-654118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (537.060329ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:35:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-654118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-654118 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-654118 describe deploy/metrics-server -n kube-system: exit status 1 (63.916361ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-654118 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-654118
helpers_test.go:243: (dbg) docker inspect embed-certs-654118:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06",
	        "Created": "2025-12-07T23:34:44.331761062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 649887,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:34:44.367575522Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06/hostname",
	        "HostsPath": "/var/lib/docker/containers/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06/hosts",
	        "LogPath": "/var/lib/docker/containers/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06-json.log",
	        "Name": "/embed-certs-654118",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-654118:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-654118",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06",
	                "LowerDir": "/var/lib/docker/overlay2/b033e7e02e0290ed765f992d60e4a6dc2240c75ef7b2064b0c47febefaf70b5f-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b033e7e02e0290ed765f992d60e4a6dc2240c75ef7b2064b0c47febefaf70b5f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b033e7e02e0290ed765f992d60e4a6dc2240c75ef7b2064b0c47febefaf70b5f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b033e7e02e0290ed765f992d60e4a6dc2240c75ef7b2064b0c47febefaf70b5f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-654118",
	                "Source": "/var/lib/docker/volumes/embed-certs-654118/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-654118",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-654118",
	                "name.minikube.sigs.k8s.io": "embed-certs-654118",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "68b4fbd94323fb1fc944f445978eac86f7896f94a5e38425d5c5775c9e04e57e",
	            "SandboxKey": "/var/run/docker/netns/68b4fbd94323",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-654118": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eae277504c57bb79a350439d5c756b806a60082b42083657979990253737dde6",
	                    "EndpointID": "c7d4913cc40aa6426f38ccd756f43e486350b73bfe90cd7e930cd0b26d029006",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "c2:32:5c:e9:16:13",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-654118",
	                        "c652041fdce0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-654118 -n embed-certs-654118
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-654118 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-654118 logs -n 25: (1.733858245s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p cert-expiration-612608 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-612608       │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ delete  │ -p cert-expiration-612608                                                                                                                                                                                                                            │ cert-expiration-612608       │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:33 UTC │
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-320477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ stop    │ -p old-k8s-version-320477 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-320477 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p stopped-upgrade-604160                                                                                                                                                                                                                            │ stopped-upgrade-604160       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-313006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ stop    │ -p no-preload-313006 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ addons  │ enable dashboard -p no-preload-313006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ image   │ old-k8s-version-320477 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ pause   │ -p old-k8s-version-320477 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p disable-driver-mounts-837628                                                                                                                                                                                                                      │ disable-driver-mounts-837628 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p default-k8s-diff-port-312944 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-703538                                                                                                                                                                                                                         │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-654118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:35:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:35:57.632024  665837 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:35:57.632148  665837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:35:57.632156  665837 out.go:374] Setting ErrFile to fd 2...
	I1207 23:35:57.632161  665837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:35:57.632377  665837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:35:57.632861  665837 out.go:368] Setting JSON to false
	I1207 23:35:57.634012  665837 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8302,"bootTime":1765142256,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:35:57.634076  665837 start.go:143] virtualization: kvm guest
	I1207 23:35:57.636222  665837 out.go:179] * [newest-cni-858719] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:35:57.637537  665837 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:35:57.637601  665837 notify.go:221] Checking for updates...
	I1207 23:35:57.639940  665837 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:35:57.641367  665837 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:35:57.642948  665837 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:35:57.644532  665837 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:35:57.646052  665837 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:35:57.648187  665837 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:35:57.648313  665837 config.go:182] Loaded profile config "embed-certs-654118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:35:57.648457  665837 config.go:182] Loaded profile config "no-preload-313006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:35:57.648599  665837 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:35:57.673911  665837 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:35:57.674024  665837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:35:57.734477  665837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-07 23:35:57.723366575 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:35:57.734675  665837 docker.go:319] overlay module found
	I1207 23:35:57.736420  665837 out.go:179] * Using the docker driver based on user configuration
	I1207 23:35:57.737776  665837 start.go:309] selected driver: docker
	I1207 23:35:57.737790  665837 start.go:927] validating driver "docker" against <nil>
	I1207 23:35:57.737804  665837 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:35:57.738404  665837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:35:57.800588  665837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-07 23:35:57.790071121 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:35:57.800741  665837 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1207 23:35:57.800777  665837 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1207 23:35:57.801009  665837 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1207 23:35:57.803264  665837 out.go:179] * Using Docker driver with root privileges
	I1207 23:35:57.804574  665837 cni.go:84] Creating CNI manager for ""
	I1207 23:35:57.804645  665837 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:35:57.804658  665837 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 23:35:57.804753  665837 start.go:353] cluster config:
	{Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:35:57.806078  665837 out.go:179] * Starting "newest-cni-858719" primary control-plane node in "newest-cni-858719" cluster
	I1207 23:35:57.807183  665837 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:35:57.808220  665837 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:35:57.809361  665837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:35:57.809405  665837 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1207 23:35:57.809414  665837 cache.go:65] Caching tarball of preloaded images
	I1207 23:35:57.809487  665837 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:35:57.809505  665837 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:35:57.809517  665837 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1207 23:35:57.809622  665837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json ...
	I1207 23:35:57.809642  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json: {Name:mk58abd3aba696b237e078949efd134e91598be6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:35:57.833660  665837 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:35:57.833685  665837 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:35:57.833709  665837 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:35:57.833753  665837 start.go:360] acquireMachinesLock for newest-cni-858719: {Name:mk3f9783a06cd72eff911e9615fc59e854b06695 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:57.834702  665837 start.go:364] duration metric: took 917.515µs to acquireMachinesLock for "newest-cni-858719"
	I1207 23:35:57.834748  665837 start.go:93] Provisioning new machine with config: &{Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:35:57.834842  665837 start.go:125] createHost starting for "" (driver="docker")
	I1207 23:35:57.086490  656318 pod_ready.go:94] pod "kube-controller-manager-no-preload-313006" is "Ready"
	I1207 23:35:57.086526  656318 pod_ready.go:86] duration metric: took 184.52001ms for pod "kube-controller-manager-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:57.285885  656318 pod_ready.go:83] waiting for pod "kube-proxy-xw4pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:57.686019  656318 pod_ready.go:94] pod "kube-proxy-xw4pf" is "Ready"
	I1207 23:35:57.686045  656318 pod_ready.go:86] duration metric: took 400.132494ms for pod "kube-proxy-xw4pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:57.886678  656318 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:58.286146  656318 pod_ready.go:94] pod "kube-scheduler-no-preload-313006" is "Ready"
	I1207 23:35:58.286179  656318 pod_ready.go:86] duration metric: took 399.470825ms for pod "kube-scheduler-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:58.286194  656318 pod_ready.go:40] duration metric: took 36.408049997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:35:58.340973  656318 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1207 23:35:58.342970  656318 out.go:179] * Done! kubectl is now configured to use "no-preload-313006" cluster and "default" namespace by default
	I1207 23:35:53.513092  663227 cli_runner.go:164] Run: docker exec default-k8s-diff-port-312944 stat /var/lib/dpkg/alternatives/iptables
	I1207 23:35:53.569664  663227 oci.go:144] the created container "default-k8s-diff-port-312944" has a running status.
	I1207 23:35:53.569699  663227 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa...
	I1207 23:35:53.616194  663227 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 23:35:53.654955  663227 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:35:53.677815  663227 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 23:35:53.677836  663227 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-312944 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 23:35:53.734256  663227 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:35:53.756551  663227 machine.go:94] provisionDockerMachine start ...
	I1207 23:35:53.756699  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:53.777538  663227 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:53.777885  663227 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1207 23:35:53.777903  663227 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:35:53.778498  663227 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33252->127.0.0.1:33453: read: connection reset by peer
	I1207 23:35:56.912375  663227 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-312944
	
	I1207 23:35:56.912407  663227 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-312944"
	I1207 23:35:56.912481  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:56.933722  663227 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:56.933966  663227 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1207 23:35:56.933978  663227 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-312944 && echo "default-k8s-diff-port-312944" | sudo tee /etc/hostname
	I1207 23:35:57.089581  663227 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-312944
	
	I1207 23:35:57.089671  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:57.108882  663227 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:57.109181  663227 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1207 23:35:57.109209  663227 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-312944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-312944/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-312944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:35:57.239382  663227 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:35:57.239416  663227 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:35:57.239450  663227 ubuntu.go:190] setting up certificates
	I1207 23:35:57.239464  663227 provision.go:84] configureAuth start
	I1207 23:35:57.239537  663227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:35:57.259204  663227 provision.go:143] copyHostCerts
	I1207 23:35:57.259266  663227 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:35:57.259275  663227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:35:57.259370  663227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:35:57.259494  663227 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:35:57.259504  663227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:35:57.259547  663227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:35:57.259610  663227 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:35:57.259617  663227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:35:57.259644  663227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:35:57.259709  663227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-312944 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-312944 localhost minikube]
	I1207 23:35:57.380006  663227 provision.go:177] copyRemoteCerts
	I1207 23:35:57.380201  663227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:35:57.380362  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:57.400600  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:57.514388  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:35:57.542751  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1207 23:35:57.561877  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:35:57.580928  663227 provision.go:87] duration metric: took 341.449385ms to configureAuth
	I1207 23:35:57.580959  663227 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:35:57.581113  663227 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:35:57.581208  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:57.601315  663227 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:57.601571  663227 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1207 23:35:57.601587  663227 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:35:57.900137  663227 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:35:57.900168  663227 machine.go:97] duration metric: took 4.143590275s to provisionDockerMachine
	I1207 23:35:57.900181  663227 client.go:176] duration metric: took 9.197426744s to LocalClient.Create
	I1207 23:35:57.900203  663227 start.go:167] duration metric: took 9.197496265s to libmachine.API.Create "default-k8s-diff-port-312944"
	I1207 23:35:57.900219  663227 start.go:293] postStartSetup for "default-k8s-diff-port-312944" (driver="docker")
	I1207 23:35:57.900236  663227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:35:57.900318  663227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:35:57.900402  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:57.923155  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:58.027598  663227 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:35:58.031781  663227 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:35:58.031812  663227 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:35:58.031825  663227 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:35:58.031877  663227 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:35:58.031973  663227 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:35:58.032092  663227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:35:58.040270  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:35:58.063465  663227 start.go:296] duration metric: took 163.229604ms for postStartSetup
	I1207 23:35:58.063866  663227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:35:58.087920  663227 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/config.json ...
	I1207 23:35:58.088259  663227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:35:58.088304  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:58.109143  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:58.205010  663227 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:35:58.210080  663227 start.go:128] duration metric: took 9.50986297s to createHost
	I1207 23:35:58.210108  663227 start.go:83] releasing machines lock for "default-k8s-diff-port-312944", held for 9.51001628s
	I1207 23:35:58.210186  663227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:35:58.230428  663227 ssh_runner.go:195] Run: cat /version.json
	I1207 23:35:58.230495  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:58.230505  663227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:35:58.230600  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:58.251090  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:58.251094  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:58.351196  663227 ssh_runner.go:195] Run: systemctl --version
	I1207 23:35:58.444301  663227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:35:58.490592  663227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	
	
	==> CRI-O <==
	Dec 07 23:35:46 embed-certs-654118 crio[768]: time="2025-12-07T23:35:46.30319287Z" level=info msg="Starting container: 6be35e543135b5f87f3b26be16d2fe9f5533fdc119a5a4275b9f99868539eb88" id=0e4080e6-b913-4e89-8618-3c296a795f94 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:35:46 embed-certs-654118 crio[768]: time="2025-12-07T23:35:46.305736207Z" level=info msg="Started container" PID=1861 containerID=6be35e543135b5f87f3b26be16d2fe9f5533fdc119a5a4275b9f99868539eb88 description=kube-system/coredns-66bc5c9577-wvgqf/coredns id=0e4080e6-b913-4e89-8618-3c296a795f94 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ab3fe9355dd88c0e99ca270650155ab19416106d0d8c5ff0579479164e0d6c5
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.388816774Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b600f826-0b74-43ea-9206-104a79af7b3f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.388901795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.394875125Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:247b019b614fa40c96e233211c3a1bcc03392e778a17d8337586297d8b24d34c UID:64a194a3-ffb4-468c-a744-5215164f87c1 NetNS:/var/run/netns/21864a11-5a33-4ced-8702-53b0c1cda0f3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009063f8}] Aliases:map[]}"
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.394909566Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.408299929Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:247b019b614fa40c96e233211c3a1bcc03392e778a17d8337586297d8b24d34c UID:64a194a3-ffb4-468c-a744-5215164f87c1 NetNS:/var/run/netns/21864a11-5a33-4ced-8702-53b0c1cda0f3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009063f8}] Aliases:map[]}"
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.408513027Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.409734974Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.41100886Z" level=info msg="Ran pod sandbox 247b019b614fa40c96e233211c3a1bcc03392e778a17d8337586297d8b24d34c with infra container: default/busybox/POD" id=b600f826-0b74-43ea-9206-104a79af7b3f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.41238399Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=baf4d92e-9614-4ba8-b373-3fb2774bfb04 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.412551613Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=baf4d92e-9614-4ba8-b373-3fb2774bfb04 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.41260414Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=baf4d92e-9614-4ba8-b373-3fb2774bfb04 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.413363636Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0a859a55-b067-41db-8c88-8eb7a7ed243b name=/runtime.v1.ImageService/PullImage
	Dec 07 23:35:49 embed-certs-654118 crio[768]: time="2025-12-07T23:35:49.415439176Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 07 23:35:53 embed-certs-654118 crio[768]: time="2025-12-07T23:35:53.026927123Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0a859a55-b067-41db-8c88-8eb7a7ed243b name=/runtime.v1.ImageService/PullImage
	Dec 07 23:35:53 embed-certs-654118 crio[768]: time="2025-12-07T23:35:53.02775866Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4cb13655-4df9-4611-b04d-8a04a8190dab name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:53 embed-certs-654118 crio[768]: time="2025-12-07T23:35:53.029628613Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8969fc05-016e-46fd-af20-3a8726d933c4 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:53 embed-certs-654118 crio[768]: time="2025-12-07T23:35:53.033524487Z" level=info msg="Creating container: default/busybox/busybox" id=37038ab7-22fb-484d-a7ee-9d504c7958ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:53 embed-certs-654118 crio[768]: time="2025-12-07T23:35:53.033656383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:53 embed-certs-654118 crio[768]: time="2025-12-07T23:35:53.037869061Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:53 embed-certs-654118 crio[768]: time="2025-12-07T23:35:53.038401579Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:53 embed-certs-654118 crio[768]: time="2025-12-07T23:35:53.061143812Z" level=info msg="Created container af9004f573baafc6ba6aadfc1d44e04a74d7261398e5cefff0955a169126889a: default/busybox/busybox" id=37038ab7-22fb-484d-a7ee-9d504c7958ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:53 embed-certs-654118 crio[768]: time="2025-12-07T23:35:53.061869725Z" level=info msg="Starting container: af9004f573baafc6ba6aadfc1d44e04a74d7261398e5cefff0955a169126889a" id=7906e4f4-639f-4b8a-823a-fc13a45c7d36 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:35:53 embed-certs-654118 crio[768]: time="2025-12-07T23:35:53.064404669Z" level=info msg="Started container" PID=1940 containerID=af9004f573baafc6ba6aadfc1d44e04a74d7261398e5cefff0955a169126889a description=default/busybox/busybox id=7906e4f4-639f-4b8a-823a-fc13a45c7d36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=247b019b614fa40c96e233211c3a1bcc03392e778a17d8337586297d8b24d34c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	af9004f573baa       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago        Running             busybox                   0                   247b019b614fa       busybox                                      default
	6be35e543135b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 seconds ago       Running             coredns                   0                   7ab3fe9355dd8       coredns-66bc5c9577-wvgqf                     kube-system
	af6c56acc3732       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago       Running             storage-provisioner       0                   a2fcbfe10c8d3       storage-provisioner                          kube-system
	24b02f20acf3f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      56 seconds ago       Running             kube-proxy                0                   56b37a242848c       kube-proxy-l75b2                             kube-system
	c0ebe63234307       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      56 seconds ago       Running             kindnet-cni               0                   5bb4bdba925f9       kindnet-68q87                                kube-system
	f74d1aa292e82       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      About a minute ago   Running             kube-apiserver            0                   5f2aea4694206       kube-apiserver-embed-certs-654118            kube-system
	592893d2c62e0       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      About a minute ago   Running             etcd                      0                   bb9a74aac5c90       etcd-embed-certs-654118                      kube-system
	7599461dcdc33       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      About a minute ago   Running             kube-scheduler            0                   e0b28f486e109       kube-scheduler-embed-certs-654118            kube-system
	ac95958070df5       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      About a minute ago   Running             kube-controller-manager   0                   9d1776e558691       kube-controller-manager-embed-certs-654118   kube-system
	
	
	==> coredns [6be35e543135b5f87f3b26be16d2fe9f5533fdc119a5a4275b9f99868539eb88] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56729 - 10142 "HINFO IN 6918123703969321736.6657731914959951789. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021871949s
	
	
	==> describe nodes <==
	Name:               embed-certs-654118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-654118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=embed-certs-654118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_34_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:34:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-654118
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:36:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:35:59 +0000   Sun, 07 Dec 2025 23:34:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:35:59 +0000   Sun, 07 Dec 2025 23:34:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:35:59 +0000   Sun, 07 Dec 2025 23:34:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:35:59 +0000   Sun, 07 Dec 2025 23:35:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-654118
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                03c8ca8e-58f6-4b1a-acac-362ecdda585b
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-wvgqf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     57s
	  kube-system                 etcd-embed-certs-654118                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         63s
	  kube-system                 kindnet-68q87                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-654118             250m (3%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-embed-certs-654118    200m (2%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-l75b2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-654118             100m (1%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node embed-certs-654118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node embed-certs-654118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s (x8 over 67s)  kubelet          Node embed-certs-654118 status is now: NodeHasSufficientPID
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s                kubelet          Node embed-certs-654118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s                kubelet          Node embed-certs-654118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s                kubelet          Node embed-certs-654118 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s                node-controller  Node embed-certs-654118 event: Registered Node embed-certs-654118 in Controller
	  Normal  NodeReady                16s                kubelet          Node embed-certs-654118 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [592893d2c62e092d1b668539698dee2cf8d2f9db7de2b5aa0324d57b98887b9c] <==
	{"level":"warn","ts":"2025-12-07T23:34:55.843522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.853133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.861665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.869660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.878930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.888124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.898711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.906772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.915185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.923369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.931279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.939256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.947544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.954993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.963268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.971262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.988517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:55.996517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:56.004727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:34:56.072683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37164","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T23:35:52.484267Z","caller":"traceutil/trace.go:172","msg":"trace[961778840] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"125.425154ms","start":"2025-12-07T23:35:52.358823Z","end":"2025-12-07T23:35:52.484248Z","steps":["trace[961778840] 'process raft request'  (duration: 125.295208ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T23:36:00.108104Z","caller":"traceutil/trace.go:172","msg":"trace[1761122423] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"154.672483ms","start":"2025-12-07T23:35:59.953414Z","end":"2025-12-07T23:36:00.108087Z","steps":["trace[1761122423] 'process raft request'  (duration: 154.454926ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T23:36:00.236658Z","caller":"traceutil/trace.go:172","msg":"trace[2017915292] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"172.407101ms","start":"2025-12-07T23:36:00.064227Z","end":"2025-12-07T23:36:00.236634Z","steps":["trace[2017915292] 'process raft request'  (duration: 166.539868ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-07T23:36:00.452821Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.119554ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-07T23:36:00.452910Z","caller":"traceutil/trace.go:172","msg":"trace[1026050191] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:483; }","duration":"112.21759ms","start":"2025-12-07T23:36:00.340673Z","end":"2025-12-07T23:36:00.452891Z","steps":["trace[1026050191] 'range keys from in-memory index tree'  (duration: 112.060318ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:36:01 up  2:18,  0 user,  load average: 2.51, 2.22, 1.82
	Linux embed-certs-654118 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c0ebe63234307b62ad32a0740337464b6e6f4141b5bd54e3a76cfc409c976608] <==
	I1207 23:35:05.188147       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:35:05.188455       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1207 23:35:05.188660       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:35:05.188675       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:35:05.188697       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:35:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:35:05.393131       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:35:05.393174       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:35:05.393185       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:35:05.393482       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1207 23:35:35.393906       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1207 23:35:35.393916       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1207 23:35:35.393920       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1207 23:35:35.395195       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1207 23:35:36.593413       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:35:36.593445       1 metrics.go:72] Registering metrics
	I1207 23:35:36.593537       1 controller.go:711] "Syncing nftables rules"
	I1207 23:35:45.399562       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:35:45.399631       1 main.go:301] handling current node
	I1207 23:35:55.392415       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:35:55.392462       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f74d1aa292e82203a22a5866cc355c619250cf6f7b00318751d5732f92332376] <==
	E1207 23:34:56.703684       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1207 23:34:56.713812       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:34:56.718242       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:34:56.718243       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1207 23:34:56.727177       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:34:56.727402       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1207 23:34:56.906666       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:34:57.519107       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1207 23:34:57.523102       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1207 23:34:57.523120       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 23:34:58.018986       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:34:58.056654       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:34:58.120735       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1207 23:34:58.126816       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1207 23:34:58.127922       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:34:58.132089       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:34:58.582135       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:34:58.971067       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:34:58.983860       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1207 23:34:58.992205       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:35:04.335898       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:35:04.341146       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:35:04.540211       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:35:04.634880       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1207 23:35:59.194035       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:33278: use of closed network connection
	
	
	==> kube-controller-manager [ac95958070df584f251979eac457ca0b90f6989ed40f0821d3d3806f636f1abe] <==
	I1207 23:35:03.580359       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1207 23:35:03.580368       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1207 23:35:03.581489       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1207 23:35:03.581521       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1207 23:35:03.581549       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1207 23:35:03.581584       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1207 23:35:03.581586       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1207 23:35:03.581651       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1207 23:35:03.581618       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1207 23:35:03.581903       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1207 23:35:03.581938       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1207 23:35:03.581982       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1207 23:35:03.583368       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1207 23:35:03.584490       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1207 23:35:03.585080       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1207 23:35:03.585737       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1207 23:35:03.585812       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1207 23:35:03.585909       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-654118"
	I1207 23:35:03.585969       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1207 23:35:03.589121       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 23:35:03.589147       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 23:35:03.592414       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1207 23:35:03.601636       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1207 23:35:03.602398       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 23:35:48.591592       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [24b02f20acf3f4a919eae3bbfb7f505e80aca565009c16ff1b29769cb2907df6] <==
	I1207 23:35:05.051920       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:35:05.130435       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 23:35:05.231603       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 23:35:05.231676       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1207 23:35:05.231815       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:35:05.250660       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:35:05.250708       1 server_linux.go:132] "Using iptables Proxier"
	I1207 23:35:05.255823       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:35:05.256170       1 server.go:527] "Version info" version="v1.34.2"
	I1207 23:35:05.256194       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:35:05.257235       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:35:05.257270       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:35:05.257345       1 config.go:309] "Starting node config controller"
	I1207 23:35:05.257359       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:35:05.257414       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:35:05.257428       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:35:05.257410       1 config.go:200] "Starting service config controller"
	I1207 23:35:05.257461       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:35:05.357578       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:35:05.357565       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:35:05.357605       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:35:05.357603       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7599461dcdc3389d6846e3930e089fc676ec0c72802f6dbd7f31ad6a3bb10836] <==
	I1207 23:34:56.861527       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1207 23:34:56.862625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1207 23:34:56.863291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1207 23:34:56.863728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 23:34:56.863768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 23:34:56.863685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 23:34:56.863830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1207 23:34:56.863862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1207 23:34:56.863973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 23:34:56.864261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 23:34:56.864312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 23:34:56.864529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 23:34:56.865001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1207 23:34:56.865035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 23:34:56.865103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 23:34:56.865166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 23:34:56.865245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 23:34:56.865310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 23:34:56.865412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1207 23:34:56.865529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 23:34:57.753583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 23:34:57.756881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1207 23:34:57.769269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 23:34:57.812361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1207 23:34:59.962001       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 23:34:59 embed-certs-654118 kubelet[1326]: E1207 23:34:59.847630    1326 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-654118\" already exists" pod="kube-system/etcd-embed-certs-654118"
	Dec 07 23:34:59 embed-certs-654118 kubelet[1326]: I1207 23:34:59.880362    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-654118" podStartSLOduration=2.880337528 podStartE2EDuration="2.880337528s" podCreationTimestamp="2025-12-07 23:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:34:59.864466652 +0000 UTC m=+1.128955788" watchObservedRunningTime="2025-12-07 23:34:59.880337528 +0000 UTC m=+1.144826623"
	Dec 07 23:34:59 embed-certs-654118 kubelet[1326]: I1207 23:34:59.888763    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-654118" podStartSLOduration=2.8887434929999998 podStartE2EDuration="2.888743493s" podCreationTimestamp="2025-12-07 23:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:34:59.88066753 +0000 UTC m=+1.145156637" watchObservedRunningTime="2025-12-07 23:34:59.888743493 +0000 UTC m=+1.153232613"
	Dec 07 23:34:59 embed-certs-654118 kubelet[1326]: I1207 23:34:59.888898    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-654118" podStartSLOduration=1.8888873080000002 podStartE2EDuration="1.888887308s" podCreationTimestamp="2025-12-07 23:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:34:59.888662357 +0000 UTC m=+1.153151497" watchObservedRunningTime="2025-12-07 23:34:59.888887308 +0000 UTC m=+1.153376425"
	Dec 07 23:34:59 embed-certs-654118 kubelet[1326]: I1207 23:34:59.898059    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-654118" podStartSLOduration=1.898039026 podStartE2EDuration="1.898039026s" podCreationTimestamp="2025-12-07 23:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:34:59.897892474 +0000 UTC m=+1.162381600" watchObservedRunningTime="2025-12-07 23:34:59.898039026 +0000 UTC m=+1.162528141"
	Dec 07 23:35:03 embed-certs-654118 kubelet[1326]: I1207 23:35:03.551711    1326 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 07 23:35:03 embed-certs-654118 kubelet[1326]: I1207 23:35:03.552516    1326 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 07 23:35:04 embed-certs-654118 kubelet[1326]: I1207 23:35:04.748411    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a-xtables-lock\") pod \"kindnet-68q87\" (UID: \"7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a\") " pod="kube-system/kindnet-68q87"
	Dec 07 23:35:04 embed-certs-654118 kubelet[1326]: I1207 23:35:04.748468    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f061a54-3641-473d-9c6a-77e51062e955-kube-proxy\") pod \"kube-proxy-l75b2\" (UID: \"2f061a54-3641-473d-9c6a-77e51062e955\") " pod="kube-system/kube-proxy-l75b2"
	Dec 07 23:35:04 embed-certs-654118 kubelet[1326]: I1207 23:35:04.748506    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9qln\" (UniqueName: \"kubernetes.io/projected/2f061a54-3641-473d-9c6a-77e51062e955-kube-api-access-m9qln\") pod \"kube-proxy-l75b2\" (UID: \"2f061a54-3641-473d-9c6a-77e51062e955\") " pod="kube-system/kube-proxy-l75b2"
	Dec 07 23:35:04 embed-certs-654118 kubelet[1326]: I1207 23:35:04.748576    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a-cni-cfg\") pod \"kindnet-68q87\" (UID: \"7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a\") " pod="kube-system/kindnet-68q87"
	Dec 07 23:35:04 embed-certs-654118 kubelet[1326]: I1207 23:35:04.748665    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f061a54-3641-473d-9c6a-77e51062e955-xtables-lock\") pod \"kube-proxy-l75b2\" (UID: \"2f061a54-3641-473d-9c6a-77e51062e955\") " pod="kube-system/kube-proxy-l75b2"
	Dec 07 23:35:04 embed-certs-654118 kubelet[1326]: I1207 23:35:04.748710    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f061a54-3641-473d-9c6a-77e51062e955-lib-modules\") pod \"kube-proxy-l75b2\" (UID: \"2f061a54-3641-473d-9c6a-77e51062e955\") " pod="kube-system/kube-proxy-l75b2"
	Dec 07 23:35:04 embed-certs-654118 kubelet[1326]: I1207 23:35:04.748746    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a-lib-modules\") pod \"kindnet-68q87\" (UID: \"7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a\") " pod="kube-system/kindnet-68q87"
	Dec 07 23:35:04 embed-certs-654118 kubelet[1326]: I1207 23:35:04.748786    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpkpx\" (UniqueName: \"kubernetes.io/projected/7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a-kube-api-access-vpkpx\") pod \"kindnet-68q87\" (UID: \"7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a\") " pod="kube-system/kindnet-68q87"
	Dec 07 23:35:05 embed-certs-654118 kubelet[1326]: I1207 23:35:05.864719    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-68q87" podStartSLOduration=1.8646975590000001 podStartE2EDuration="1.864697559s" podCreationTimestamp="2025-12-07 23:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:35:05.864509638 +0000 UTC m=+7.128998770" watchObservedRunningTime="2025-12-07 23:35:05.864697559 +0000 UTC m=+7.129186675"
	Dec 07 23:35:05 embed-certs-654118 kubelet[1326]: I1207 23:35:05.878536    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l75b2" podStartSLOduration=1.878512991 podStartE2EDuration="1.878512991s" podCreationTimestamp="2025-12-07 23:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:35:05.878295397 +0000 UTC m=+7.142784539" watchObservedRunningTime="2025-12-07 23:35:05.878512991 +0000 UTC m=+7.143002107"
	Dec 07 23:35:45 embed-certs-654118 kubelet[1326]: I1207 23:35:45.882668    1326 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 07 23:35:45 embed-certs-654118 kubelet[1326]: I1207 23:35:45.968569    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/34685d0c-67b3-4683-b817-772fa2ef1c77-tmp\") pod \"storage-provisioner\" (UID: \"34685d0c-67b3-4683-b817-772fa2ef1c77\") " pod="kube-system/storage-provisioner"
	Dec 07 23:35:45 embed-certs-654118 kubelet[1326]: I1207 23:35:45.968621    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4476n\" (UniqueName: \"kubernetes.io/projected/34685d0c-67b3-4683-b817-772fa2ef1c77-kube-api-access-4476n\") pod \"storage-provisioner\" (UID: \"34685d0c-67b3-4683-b817-772fa2ef1c77\") " pod="kube-system/storage-provisioner"
	Dec 07 23:35:45 embed-certs-654118 kubelet[1326]: I1207 23:35:45.968647    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80c1683b-a66c-4dd4-8d91-0e5cc2bd5e18-config-volume\") pod \"coredns-66bc5c9577-wvgqf\" (UID: \"80c1683b-a66c-4dd4-8d91-0e5cc2bd5e18\") " pod="kube-system/coredns-66bc5c9577-wvgqf"
	Dec 07 23:35:45 embed-certs-654118 kubelet[1326]: I1207 23:35:45.968671    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd7lj\" (UniqueName: \"kubernetes.io/projected/80c1683b-a66c-4dd4-8d91-0e5cc2bd5e18-kube-api-access-pd7lj\") pod \"coredns-66bc5c9577-wvgqf\" (UID: \"80c1683b-a66c-4dd4-8d91-0e5cc2bd5e18\") " pod="kube-system/coredns-66bc5c9577-wvgqf"
	Dec 07 23:35:46 embed-certs-654118 kubelet[1326]: I1207 23:35:46.983010    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.982986816 podStartE2EDuration="42.982986816s" podCreationTimestamp="2025-12-07 23:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:35:46.982744511 +0000 UTC m=+48.247233619" watchObservedRunningTime="2025-12-07 23:35:46.982986816 +0000 UTC m=+48.247475931"
	Dec 07 23:35:46 embed-certs-654118 kubelet[1326]: I1207 23:35:46.983128    1326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wvgqf" podStartSLOduration=42.98312154 podStartE2EDuration="42.98312154s" podCreationTimestamp="2025-12-07 23:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:35:46.969856738 +0000 UTC m=+48.234345867" watchObservedRunningTime="2025-12-07 23:35:46.98312154 +0000 UTC m=+48.247610656"
	Dec 07 23:35:49 embed-certs-654118 kubelet[1326]: I1207 23:35:49.185888    1326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w87dg\" (UniqueName: \"kubernetes.io/projected/64a194a3-ffb4-468c-a744-5215164f87c1-kube-api-access-w87dg\") pod \"busybox\" (UID: \"64a194a3-ffb4-468c-a744-5215164f87c1\") " pod="default/busybox"
	
	
	==> storage-provisioner [af6c56acc3732045a14c44a2839a4ed51b416fad8baa69de455ea58654e0f600] <==
	I1207 23:35:46.301929       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 23:35:46.313285       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 23:35:46.313343       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 23:35:46.317364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:46.323288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:35:46.323531       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 23:35:46.323886       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-654118_d954dd40-5c59-4fac-b429-0b167e1118eb!
	I1207 23:35:46.323841       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69a0c6a4-6b58-458f-b7fc-bc544f9a2bed", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-654118_d954dd40-5c59-4fac-b429-0b167e1118eb became leader
	W1207 23:35:46.328491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:46.332535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:35:46.424661       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-654118_d954dd40-5c59-4fac-b429-0b167e1118eb!
	W1207 23:35:48.336402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:48.340727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:50.344787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:50.353152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:52.355926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:52.485534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:54.489103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:54.501538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:56.505054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:56.512024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:58.515420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:58.520159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:00.524089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:00.576099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-654118 -n embed-certs-654118
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-654118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-313006 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-313006 --alsologtostderr -v=1: exit status 80 (1.883265166s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-313006 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:36:10.287761  670257 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:36:10.287902  670257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:36:10.287918  670257 out.go:374] Setting ErrFile to fd 2...
	I1207 23:36:10.287924  670257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:36:10.288322  670257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:36:10.288697  670257 out.go:368] Setting JSON to false
	I1207 23:36:10.288729  670257 mustload.go:66] Loading cluster: no-preload-313006
	I1207 23:36:10.289266  670257 config.go:182] Loaded profile config "no-preload-313006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:10.289897  670257 cli_runner.go:164] Run: docker container inspect no-preload-313006 --format={{.State.Status}}
	I1207 23:36:10.319692  670257 host.go:66] Checking if "no-preload-313006" exists ...
	I1207 23:36:10.320084  670257 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:36:10.409880  670257 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:false NGoroutines:89 SystemTime:2025-12-07 23:36:10.396907459 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:36:10.410789  670257 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-313006 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1207 23:36:10.416864  670257 out.go:179] * Pausing node no-preload-313006 ... 
	I1207 23:36:10.418403  670257 host.go:66] Checking if "no-preload-313006" exists ...
	I1207 23:36:10.419146  670257 ssh_runner.go:195] Run: systemctl --version
	I1207 23:36:10.419253  670257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-313006
	I1207 23:36:10.444561  670257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/no-preload-313006/id_rsa Username:docker}
	I1207 23:36:10.552526  670257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:10.570601  670257 pause.go:52] kubelet running: true
	I1207 23:36:10.570706  670257 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:36:10.764714  670257 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:36:10.764852  670257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:36:10.851127  670257 cri.go:89] found id: "9d70771c342e0e6a8b340491d36ea107bf8abe93159eff71b6b33c5a89df58be"
	I1207 23:36:10.851153  670257 cri.go:89] found id: "63e35ea9afaaed7ad438f881cbcaf3b5813164e93a7f04bed7176c35907cb4c0"
	I1207 23:36:10.851159  670257 cri.go:89] found id: "393f33ab322dbe6524e1390a9b4b3524caaee37f8fd3322f5fa42afcba2d88c8"
	I1207 23:36:10.851163  670257 cri.go:89] found id: "2c733f7f60399147a390c6e21cbb293e3dd549fd6dc613363b85209ca503d959"
	I1207 23:36:10.851167  670257 cri.go:89] found id: "875984b7632065686e5488eaa175d1e9bc6f11d4ab18328ac4d3c2df479df442"
	I1207 23:36:10.851188  670257 cri.go:89] found id: "7a318b0832368150c50b8e6bcc0b249c6c0f5e0835f526a9036a3f9d6818cc85"
	I1207 23:36:10.851197  670257 cri.go:89] found id: "404e1d5beb2da9d3cc45722c51fc2e1c7b0c587a72d76030ae16a0117eb8350a"
	I1207 23:36:10.851206  670257 cri.go:89] found id: "087d0f5345ac825bcf193ab138e126157b165b5aa86f1b652afd90640d7fda6e"
	I1207 23:36:10.851217  670257 cri.go:89] found id: "1902052b7fa9a51b713591332e8f8f19d13383667710cc98390abfe859d91e2c"
	I1207 23:36:10.851234  670257 cri.go:89] found id: "956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0"
	I1207 23:36:10.851243  670257 cri.go:89] found id: "8a4e2c23a171e4e01d7e5be0846972a8e83d5db6e5feebf9d7658400cf5cf62e"
	I1207 23:36:10.851248  670257 cri.go:89] found id: ""
	I1207 23:36:10.851303  670257 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:36:10.864225  670257 retry.go:31] will retry after 133.056769ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:36:10Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:36:10.997621  670257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:11.011422  670257 pause.go:52] kubelet running: false
	I1207 23:36:11.011491  670257 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:36:11.237515  670257 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:36:11.237638  670257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:36:11.344288  670257 cri.go:89] found id: "9d70771c342e0e6a8b340491d36ea107bf8abe93159eff71b6b33c5a89df58be"
	I1207 23:36:11.344316  670257 cri.go:89] found id: "63e35ea9afaaed7ad438f881cbcaf3b5813164e93a7f04bed7176c35907cb4c0"
	I1207 23:36:11.344334  670257 cri.go:89] found id: "393f33ab322dbe6524e1390a9b4b3524caaee37f8fd3322f5fa42afcba2d88c8"
	I1207 23:36:11.344363  670257 cri.go:89] found id: "2c733f7f60399147a390c6e21cbb293e3dd549fd6dc613363b85209ca503d959"
	I1207 23:36:11.344369  670257 cri.go:89] found id: "875984b7632065686e5488eaa175d1e9bc6f11d4ab18328ac4d3c2df479df442"
	I1207 23:36:11.344374  670257 cri.go:89] found id: "7a318b0832368150c50b8e6bcc0b249c6c0f5e0835f526a9036a3f9d6818cc85"
	I1207 23:36:11.344379  670257 cri.go:89] found id: "404e1d5beb2da9d3cc45722c51fc2e1c7b0c587a72d76030ae16a0117eb8350a"
	I1207 23:36:11.344382  670257 cri.go:89] found id: "087d0f5345ac825bcf193ab138e126157b165b5aa86f1b652afd90640d7fda6e"
	I1207 23:36:11.344387  670257 cri.go:89] found id: "1902052b7fa9a51b713591332e8f8f19d13383667710cc98390abfe859d91e2c"
	I1207 23:36:11.344396  670257 cri.go:89] found id: "956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0"
	I1207 23:36:11.344409  670257 cri.go:89] found id: "8a4e2c23a171e4e01d7e5be0846972a8e83d5db6e5feebf9d7658400cf5cf62e"
	I1207 23:36:11.344414  670257 cri.go:89] found id: ""
	I1207 23:36:11.344474  670257 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:36:11.360818  670257 retry.go:31] will retry after 483.539847ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:36:11Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:36:11.845378  670257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:11.858949  670257 pause.go:52] kubelet running: false
	I1207 23:36:11.859005  670257 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:36:12.001969  670257 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:36:12.002062  670257 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:36:12.071056  670257 cri.go:89] found id: "9d70771c342e0e6a8b340491d36ea107bf8abe93159eff71b6b33c5a89df58be"
	I1207 23:36:12.071083  670257 cri.go:89] found id: "63e35ea9afaaed7ad438f881cbcaf3b5813164e93a7f04bed7176c35907cb4c0"
	I1207 23:36:12.071089  670257 cri.go:89] found id: "393f33ab322dbe6524e1390a9b4b3524caaee37f8fd3322f5fa42afcba2d88c8"
	I1207 23:36:12.071094  670257 cri.go:89] found id: "2c733f7f60399147a390c6e21cbb293e3dd549fd6dc613363b85209ca503d959"
	I1207 23:36:12.071098  670257 cri.go:89] found id: "875984b7632065686e5488eaa175d1e9bc6f11d4ab18328ac4d3c2df479df442"
	I1207 23:36:12.071103  670257 cri.go:89] found id: "7a318b0832368150c50b8e6bcc0b249c6c0f5e0835f526a9036a3f9d6818cc85"
	I1207 23:36:12.071134  670257 cri.go:89] found id: "404e1d5beb2da9d3cc45722c51fc2e1c7b0c587a72d76030ae16a0117eb8350a"
	I1207 23:36:12.071138  670257 cri.go:89] found id: "087d0f5345ac825bcf193ab138e126157b165b5aa86f1b652afd90640d7fda6e"
	I1207 23:36:12.071143  670257 cri.go:89] found id: "1902052b7fa9a51b713591332e8f8f19d13383667710cc98390abfe859d91e2c"
	I1207 23:36:12.071153  670257 cri.go:89] found id: "956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0"
	I1207 23:36:12.071164  670257 cri.go:89] found id: "8a4e2c23a171e4e01d7e5be0846972a8e83d5db6e5feebf9d7658400cf5cf62e"
	I1207 23:36:12.071168  670257 cri.go:89] found id: ""
	I1207 23:36:12.071212  670257 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:36:12.085403  670257 out.go:203] 
	W1207 23:36:12.086880  670257 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:36:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 23:36:12.086906  670257 out.go:285] * 
	W1207 23:36:12.091817  670257 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 23:36:12.093306  670257 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-313006 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-313006
helpers_test.go:243: (dbg) docker inspect no-preload-313006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28",
	        "Created": "2025-12-07T23:33:56.743918699Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 656576,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:35:12.209081803Z",
	            "FinishedAt": "2025-12-07T23:35:10.472530731Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28/hosts",
	        "LogPath": "/var/lib/docker/containers/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28-json.log",
	        "Name": "/no-preload-313006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-313006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-313006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28",
	                "LowerDir": "/var/lib/docker/overlay2/3127bde15e4dc2f4657d8e4018b5da1f90b377ad2f68b2bb2e943541b2587371-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3127bde15e4dc2f4657d8e4018b5da1f90b377ad2f68b2bb2e943541b2587371/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3127bde15e4dc2f4657d8e4018b5da1f90b377ad2f68b2bb2e943541b2587371/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3127bde15e4dc2f4657d8e4018b5da1f90b377ad2f68b2bb2e943541b2587371/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-313006",
	                "Source": "/var/lib/docker/volumes/no-preload-313006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-313006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-313006",
	                "name.minikube.sigs.k8s.io": "no-preload-313006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e03649c3ddb92a2a229325c642c3325d1bb9416a5abb1aad0119efbdce0c62e5",
	            "SandboxKey": "/var/run/docker/netns/e03649c3ddb9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-313006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "357321d5a31d4d37dba08f8b7360dac5f2baa6c86fc4940023c2b5c75f1a37a8",
	                    "EndpointID": "c8946e407556a5aef14e5f12a07b118cd3df0fa82f16dd9cd55bdb622caa6205",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ba:13:94:ff:bc:79",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-313006",
	                        "f2f71b478561"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-313006 -n no-preload-313006
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-313006 -n no-preload-313006: exit status 2 (398.472052ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-313006 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-313006 logs -n 25: (1.282141614s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-320477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ stop    │ -p old-k8s-version-320477 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-320477 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p stopped-upgrade-604160                                                                                                                                                                                                                            │ stopped-upgrade-604160       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-313006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ stop    │ -p no-preload-313006 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ addons  │ enable dashboard -p no-preload-313006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ image   │ old-k8s-version-320477 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ pause   │ -p old-k8s-version-320477 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p disable-driver-mounts-837628                                                                                                                                                                                                                      │ disable-driver-mounts-837628 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p default-k8s-diff-port-312944 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-703538                                                                                                                                                                                                                         │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-654118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ stop    │ -p embed-certs-654118 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ image   │ no-preload-313006 image list --format=json                                                                                                                                                                                                           │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ pause   │ -p no-preload-313006 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:35:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:35:57.632024  665837 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:35:57.632148  665837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:35:57.632156  665837 out.go:374] Setting ErrFile to fd 2...
	I1207 23:35:57.632161  665837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:35:57.632377  665837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:35:57.632861  665837 out.go:368] Setting JSON to false
	I1207 23:35:57.634012  665837 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8302,"bootTime":1765142256,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:35:57.634076  665837 start.go:143] virtualization: kvm guest
	I1207 23:35:57.636222  665837 out.go:179] * [newest-cni-858719] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:35:57.637537  665837 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:35:57.637601  665837 notify.go:221] Checking for updates...
	I1207 23:35:57.639940  665837 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:35:57.641367  665837 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:35:57.642948  665837 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:35:57.644532  665837 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:35:57.646052  665837 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:35:57.648187  665837 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:35:57.648313  665837 config.go:182] Loaded profile config "embed-certs-654118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:35:57.648457  665837 config.go:182] Loaded profile config "no-preload-313006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:35:57.648599  665837 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:35:57.673911  665837 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:35:57.674024  665837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:35:57.734477  665837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-07 23:35:57.723366575 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:35:57.734675  665837 docker.go:319] overlay module found
	I1207 23:35:57.736420  665837 out.go:179] * Using the docker driver based on user configuration
	I1207 23:35:57.737776  665837 start.go:309] selected driver: docker
	I1207 23:35:57.737790  665837 start.go:927] validating driver "docker" against <nil>
	I1207 23:35:57.737804  665837 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:35:57.738404  665837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:35:57.800588  665837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-07 23:35:57.790071121 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:35:57.800741  665837 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1207 23:35:57.800777  665837 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1207 23:35:57.801009  665837 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1207 23:35:57.803264  665837 out.go:179] * Using Docker driver with root privileges
	I1207 23:35:57.804574  665837 cni.go:84] Creating CNI manager for ""
	I1207 23:35:57.804645  665837 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:35:57.804658  665837 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 23:35:57.804753  665837 start.go:353] cluster config:
	{Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:35:57.806078  665837 out.go:179] * Starting "newest-cni-858719" primary control-plane node in "newest-cni-858719" cluster
	I1207 23:35:57.807183  665837 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:35:57.808220  665837 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:35:57.809361  665837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:35:57.809405  665837 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1207 23:35:57.809414  665837 cache.go:65] Caching tarball of preloaded images
	I1207 23:35:57.809487  665837 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:35:57.809505  665837 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:35:57.809517  665837 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1207 23:35:57.809622  665837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json ...
	I1207 23:35:57.809642  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json: {Name:mk58abd3aba696b237e078949efd134e91598be6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:35:57.833660  665837 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:35:57.833685  665837 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:35:57.833709  665837 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:35:57.833753  665837 start.go:360] acquireMachinesLock for newest-cni-858719: {Name:mk3f9783a06cd72eff911e9615fc59e854b06695 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:57.834702  665837 start.go:364] duration metric: took 917.515µs to acquireMachinesLock for "newest-cni-858719"
	I1207 23:35:57.834748  665837 start.go:93] Provisioning new machine with config: &{Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:35:57.834842  665837 start.go:125] createHost starting for "" (driver="docker")
	I1207 23:35:57.086490  656318 pod_ready.go:94] pod "kube-controller-manager-no-preload-313006" is "Ready"
	I1207 23:35:57.086526  656318 pod_ready.go:86] duration metric: took 184.52001ms for pod "kube-controller-manager-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:57.285885  656318 pod_ready.go:83] waiting for pod "kube-proxy-xw4pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:57.686019  656318 pod_ready.go:94] pod "kube-proxy-xw4pf" is "Ready"
	I1207 23:35:57.686045  656318 pod_ready.go:86] duration metric: took 400.132494ms for pod "kube-proxy-xw4pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:57.886678  656318 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:58.286146  656318 pod_ready.go:94] pod "kube-scheduler-no-preload-313006" is "Ready"
	I1207 23:35:58.286179  656318 pod_ready.go:86] duration metric: took 399.470825ms for pod "kube-scheduler-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:58.286194  656318 pod_ready.go:40] duration metric: took 36.408049997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:35:58.340973  656318 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1207 23:35:58.342970  656318 out.go:179] * Done! kubectl is now configured to use "no-preload-313006" cluster and "default" namespace by default
	I1207 23:35:53.513092  663227 cli_runner.go:164] Run: docker exec default-k8s-diff-port-312944 stat /var/lib/dpkg/alternatives/iptables
	I1207 23:35:53.569664  663227 oci.go:144] the created container "default-k8s-diff-port-312944" has a running status.
	I1207 23:35:53.569699  663227 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa...
	I1207 23:35:53.616194  663227 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 23:35:53.654955  663227 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:35:53.677815  663227 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 23:35:53.677836  663227 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-312944 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 23:35:53.734256  663227 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:35:53.756551  663227 machine.go:94] provisionDockerMachine start ...
	I1207 23:35:53.756699  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:53.777538  663227 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:53.777885  663227 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1207 23:35:53.777903  663227 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:35:53.778498  663227 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33252->127.0.0.1:33453: read: connection reset by peer
	I1207 23:35:56.912375  663227 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-312944
	
	I1207 23:35:56.912407  663227 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-312944"
	I1207 23:35:56.912481  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:56.933722  663227 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:56.933966  663227 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1207 23:35:56.933978  663227 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-312944 && echo "default-k8s-diff-port-312944" | sudo tee /etc/hostname
	I1207 23:35:57.089581  663227 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-312944
	
	I1207 23:35:57.089671  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:57.108882  663227 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:57.109181  663227 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1207 23:35:57.109209  663227 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-312944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-312944/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-312944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:35:57.239382  663227 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:35:57.239416  663227 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:35:57.239450  663227 ubuntu.go:190] setting up certificates
	I1207 23:35:57.239464  663227 provision.go:84] configureAuth start
	I1207 23:35:57.239537  663227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:35:57.259204  663227 provision.go:143] copyHostCerts
	I1207 23:35:57.259266  663227 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:35:57.259275  663227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:35:57.259370  663227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:35:57.259494  663227 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:35:57.259504  663227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:35:57.259547  663227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:35:57.259610  663227 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:35:57.259617  663227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:35:57.259644  663227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:35:57.259709  663227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-312944 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-312944 localhost minikube]
	I1207 23:35:57.380006  663227 provision.go:177] copyRemoteCerts
	I1207 23:35:57.380201  663227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:35:57.380362  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:57.400600  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:57.514388  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:35:57.542751  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1207 23:35:57.561877  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:35:57.580928  663227 provision.go:87] duration metric: took 341.449385ms to configureAuth
	I1207 23:35:57.580959  663227 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:35:57.581113  663227 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:35:57.581208  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:57.601315  663227 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:57.601571  663227 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1207 23:35:57.601587  663227 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:35:57.900137  663227 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:35:57.900168  663227 machine.go:97] duration metric: took 4.143590275s to provisionDockerMachine
	I1207 23:35:57.900181  663227 client.go:176] duration metric: took 9.197426744s to LocalClient.Create
	I1207 23:35:57.900203  663227 start.go:167] duration metric: took 9.197496265s to libmachine.API.Create "default-k8s-diff-port-312944"
	I1207 23:35:57.900219  663227 start.go:293] postStartSetup for "default-k8s-diff-port-312944" (driver="docker")
	I1207 23:35:57.900236  663227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:35:57.900318  663227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:35:57.900402  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:57.923155  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:58.027598  663227 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:35:58.031781  663227 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:35:58.031812  663227 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:35:58.031825  663227 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:35:58.031877  663227 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:35:58.031973  663227 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:35:58.032092  663227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:35:58.040270  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:35:58.063465  663227 start.go:296] duration metric: took 163.229604ms for postStartSetup
	I1207 23:35:58.063866  663227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:35:58.087920  663227 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/config.json ...
	I1207 23:35:58.088259  663227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:35:58.088304  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:58.109143  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:58.205010  663227 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:35:58.210080  663227 start.go:128] duration metric: took 9.50986297s to createHost
	I1207 23:35:58.210108  663227 start.go:83] releasing machines lock for "default-k8s-diff-port-312944", held for 9.51001628s
	I1207 23:35:58.210186  663227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:35:58.230428  663227 ssh_runner.go:195] Run: cat /version.json
	I1207 23:35:58.230495  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:58.230505  663227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:35:58.230600  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:58.251090  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:58.251094  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:58.351196  663227 ssh_runner.go:195] Run: systemctl --version
	I1207 23:35:58.444301  663227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:35:58.490592  663227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:35:58.497308  663227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:35:58.497393  663227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:35:58.529982  663227 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 23:35:58.530012  663227 start.go:496] detecting cgroup driver to use...
	I1207 23:35:58.530050  663227 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:35:58.530103  663227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:35:58.550131  663227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:35:58.565080  663227 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:35:58.565145  663227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:35:58.585947  663227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:35:58.611485  663227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:35:58.719375  663227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:35:58.824231  663227 docker.go:234] disabling docker service ...
	I1207 23:35:58.824292  663227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:35:58.846221  663227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:35:58.861203  663227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:35:58.952053  663227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:35:59.048221  663227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:35:59.063271  663227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:35:59.078250  663227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:35:59.078307  663227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.091512  663227 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:35:59.091585  663227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.102342  663227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.112767  663227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.122283  663227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:35:59.132267  663227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.141985  663227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.158203  663227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.169026  663227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:35:59.178463  663227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:35:59.187956  663227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:35:59.283304  663227 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:36:00.900658  663227 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.617290285s)
	I1207 23:36:00.900698  663227 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:36:00.900755  663227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:36:00.906075  663227 start.go:564] Will wait 60s for crictl version
	I1207 23:36:00.906135  663227 ssh_runner.go:195] Run: which crictl
	I1207 23:36:00.910783  663227 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:36:00.941450  663227 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:36:00.941556  663227 ssh_runner.go:195] Run: crio --version
	I1207 23:36:00.974038  663227 ssh_runner.go:195] Run: crio --version
	I1207 23:36:01.009347  663227 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:35:57.837385  665837 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1207 23:35:57.837699  665837 start.go:159] libmachine.API.Create for "newest-cni-858719" (driver="docker")
	I1207 23:35:57.837745  665837 client.go:173] LocalClient.Create starting
	I1207 23:35:57.837823  665837 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem
	I1207 23:35:57.837864  665837 main.go:143] libmachine: Decoding PEM data...
	I1207 23:35:57.837889  665837 main.go:143] libmachine: Parsing certificate...
	I1207 23:35:57.837978  665837 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem
	I1207 23:35:57.838028  665837 main.go:143] libmachine: Decoding PEM data...
	I1207 23:35:57.838047  665837 main.go:143] libmachine: Parsing certificate...
	I1207 23:35:57.838510  665837 cli_runner.go:164] Run: docker network inspect newest-cni-858719 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 23:35:57.858561  665837 cli_runner.go:211] docker network inspect newest-cni-858719 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 23:35:57.858664  665837 network_create.go:284] running [docker network inspect newest-cni-858719] to gather additional debugging logs...
	I1207 23:35:57.858692  665837 cli_runner.go:164] Run: docker network inspect newest-cni-858719
	W1207 23:35:57.876519  665837 cli_runner.go:211] docker network inspect newest-cni-858719 returned with exit code 1
	I1207 23:35:57.876574  665837 network_create.go:287] error running [docker network inspect newest-cni-858719]: docker network inspect newest-cni-858719: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-858719 not found
	I1207 23:35:57.876605  665837 network_create.go:289] output of [docker network inspect newest-cni-858719]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-858719 not found
	
	** /stderr **
	I1207 23:35:57.876734  665837 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:35:57.897599  665837 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-918c8f4f6e86 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:f0:02:fe:94:4b} reservation:<nil>}
	I1207 23:35:57.898673  665837 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce07fb07c16c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:d2:35:46:a2:0a} reservation:<nil>}
	I1207 23:35:57.899174  665837 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f198eadca31e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:79:39:d6:10:dc} reservation:<nil>}
	I1207 23:35:57.900387  665837 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e24c30}
	I1207 23:35:57.900433  665837 network_create.go:124] attempt to create docker network newest-cni-858719 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1207 23:35:57.900511  665837 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-858719 newest-cni-858719
	I1207 23:35:57.954885  665837 network_create.go:108] docker network newest-cni-858719 192.168.76.0/24 created
	I1207 23:35:57.954925  665837 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-858719" container
	I1207 23:35:57.955000  665837 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 23:35:57.974658  665837 cli_runner.go:164] Run: docker volume create newest-cni-858719 --label name.minikube.sigs.k8s.io=newest-cni-858719 --label created_by.minikube.sigs.k8s.io=true
	I1207 23:35:57.993281  665837 oci.go:103] Successfully created a docker volume newest-cni-858719
	I1207 23:35:57.993386  665837 cli_runner.go:164] Run: docker run --rm --name newest-cni-858719-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-858719 --entrypoint /usr/bin/test -v newest-cni-858719:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 23:35:58.442831  665837 oci.go:107] Successfully prepared a docker volume newest-cni-858719
	I1207 23:35:58.442922  665837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:35:58.442941  665837 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 23:35:58.443034  665837 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-858719:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1207 23:36:01.587032  665837 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-858719:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.143929303s)
	I1207 23:36:01.587075  665837 kic.go:203] duration metric: took 3.14412959s to extract preloaded images to volume ...
	W1207 23:36:01.587168  665837 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 23:36:01.587222  665837 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 23:36:01.587272  665837 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 23:36:01.651169  665837 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-858719 --name newest-cni-858719 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-858719 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-858719 --network newest-cni-858719 --ip 192.168.76.2 --volume newest-cni-858719:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 23:36:01.960842  665837 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Running}}
	I1207 23:36:01.981910  665837 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:02.003946  665837 cli_runner.go:164] Run: docker exec newest-cni-858719 stat /var/lib/dpkg/alternatives/iptables
	I1207 23:36:02.058757  665837 oci.go:144] the created container "newest-cni-858719" has a running status.
	I1207 23:36:02.058789  665837 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa...
	I1207 23:36:02.200970  665837 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 23:36:02.239696  665837 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:02.267119  665837 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 23:36:02.267154  665837 kic_runner.go:114] Args: [docker exec --privileged newest-cni-858719 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 23:36:02.338750  665837 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:02.363303  665837 machine.go:94] provisionDockerMachine start ...
	I1207 23:36:02.363460  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:02.387599  665837 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:02.387949  665837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1207 23:36:02.387977  665837 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:36:02.529963  665837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-858719
	
	I1207 23:36:02.529990  665837 ubuntu.go:182] provisioning hostname "newest-cni-858719"
	I1207 23:36:02.530052  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:02.554476  665837 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:02.554931  665837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1207 23:36:02.554949  665837 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-858719 && echo "newest-cni-858719" | sudo tee /etc/hostname
	I1207 23:36:01.010822  663227 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-312944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:36:01.034931  663227 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1207 23:36:01.040248  663227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:01.051249  663227 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-312944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:36:01.051460  663227 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:36:01.051545  663227 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:01.088728  663227 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:01.088758  663227 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:36:01.088815  663227 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:01.121286  663227 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:01.121309  663227 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:36:01.121318  663227 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.2 crio true true} ...
	I1207 23:36:01.121424  663227 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-312944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:36:01.121491  663227 ssh_runner.go:195] Run: crio config
	I1207 23:36:01.178887  663227 cni.go:84] Creating CNI manager for ""
	I1207 23:36:01.178914  663227 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:01.178938  663227 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:36:01.179025  663227 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-312944 NodeName:default-k8s-diff-port-312944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:36:01.179239  663227 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-312944"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:36:01.179532  663227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:36:01.189813  663227 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:36:01.189950  663227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:36:01.200736  663227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1207 23:36:01.217998  663227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:36:01.327130  663227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1207 23:36:01.341947  663227 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:36:01.346354  663227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:01.412358  663227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:01.521527  663227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:36:01.549890  663227 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944 for IP: 192.168.94.2
	I1207 23:36:01.549911  663227 certs.go:195] generating shared ca certs ...
	I1207 23:36:01.549928  663227 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.550081  663227 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:36:01.550150  663227 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:36:01.550161  663227 certs.go:257] generating profile certs ...
	I1207 23:36:01.550242  663227 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.key
	I1207 23:36:01.550259  663227 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.crt with IP's: []
	I1207 23:36:01.675649  663227 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.crt ...
	I1207 23:36:01.675688  663227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.crt: {Name:mka8498bebd3154217aba57e65c364430d9492be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.675892  663227 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.key ...
	I1207 23:36:01.675918  663227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.key: {Name:mka99e968c159a29eb845d9bc469095c2d3c0f20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.676054  663227 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key.025605fa
	I1207 23:36:01.676079  663227 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt.025605fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1207 23:36:01.794967  663227 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt.025605fa ...
	I1207 23:36:01.795003  663227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt.025605fa: {Name:mkf035a40b177f7648e0882d28e7999ec530dbf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.795163  663227 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key.025605fa ...
	I1207 23:36:01.795177  663227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key.025605fa: {Name:mk7ffe413bfcab5db395a27d40c70ef2413838d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.795259  663227 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt.025605fa -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt
	I1207 23:36:01.795352  663227 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key.025605fa -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key
	I1207 23:36:01.795422  663227 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.key
	I1207 23:36:01.795441  663227 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.crt with IP's: []
	I1207 23:36:01.870314  663227 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.crt ...
	I1207 23:36:01.870360  663227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.crt: {Name:mka1b99d7a1d1582ba7af59029ce88fcd4691dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.870557  663227 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.key ...
	I1207 23:36:01.870575  663227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.key: {Name:mkc0cd79e929210ed866cd63dbcbdd822785859c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.870788  663227 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:36:01.870841  663227 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:36:01.870848  663227 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:36:01.870870  663227 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:36:01.870894  663227 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:36:01.870917  663227 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:36:01.870973  663227 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:01.871802  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:36:01.892700  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:36:01.914757  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:36:01.935631  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:36:01.958032  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1207 23:36:01.978816  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:36:02.004227  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:36:02.029404  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:36:02.050868  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:36:02.076246  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:36:02.097049  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:36:02.119763  663227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:36:02.136851  663227 ssh_runner.go:195] Run: openssl version
	I1207 23:36:02.143442  663227 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:02.151407  663227 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:36:02.159022  663227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:02.163159  663227 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:02.163227  663227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:02.214043  663227 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:36:02.226767  663227 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:36:02.240376  663227 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:36:02.250985  663227 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:36:02.263110  663227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:36:02.269140  663227 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:36:02.269303  663227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:36:02.346026  663227 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:36:02.358157  663227 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
	I1207 23:36:02.369394  663227 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:36:02.381286  663227 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:36:02.393663  663227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:36:02.399245  663227 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:36:02.399308  663227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:36:02.448268  663227 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:02.457756  663227 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:02.467276  663227 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:36:02.471972  663227 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:36:02.472039  663227 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-312944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:02.472125  663227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:36:02.472163  663227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:36:02.504544  663227 cri.go:89] found id: ""
	I1207 23:36:02.504607  663227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:36:02.514004  663227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:36:02.523079  663227 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:36:02.523139  663227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:36:02.532543  663227 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:36:02.532564  663227 kubeadm.go:158] found existing configuration files:
	
	I1207 23:36:02.532607  663227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1207 23:36:02.542265  663227 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:36:02.542386  663227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:36:02.551752  663227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1207 23:36:02.562486  663227 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:36:02.562546  663227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:36:02.572678  663227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1207 23:36:02.584922  663227 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:36:02.584992  663227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:36:02.593892  663227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1207 23:36:02.603546  663227 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:36:02.603597  663227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 23:36:02.613207  663227 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:36:02.658937  663227 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 23:36:02.659104  663227 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:36:02.683465  663227 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:36:02.683537  663227 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:36:02.683586  663227 kubeadm.go:319] OS: Linux
	I1207 23:36:02.683657  663227 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:36:02.683761  663227 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:36:02.683867  663227 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:36:02.683947  663227 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:36:02.684040  663227 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:36:02.684132  663227 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:36:02.684246  663227 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:36:02.684364  663227 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:36:02.772158  663227 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:36:02.772311  663227 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:36:02.772457  663227 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:36:02.783844  663227 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 23:36:02.786437  663227 out.go:252]   - Generating certificates and keys ...
	I1207 23:36:02.786586  663227 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:36:02.786694  663227 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:36:02.705898  665837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-858719
	
	I1207 23:36:02.706000  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:02.728491  665837 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:02.728775  665837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1207 23:36:02.728805  665837 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-858719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-858719/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-858719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:36:02.872237  665837 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:36:02.872277  665837 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:36:02.872305  665837 ubuntu.go:190] setting up certificates
	I1207 23:36:02.872341  665837 provision.go:84] configureAuth start
	I1207 23:36:02.872421  665837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:02.892316  665837 provision.go:143] copyHostCerts
	I1207 23:36:02.892393  665837 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:36:02.892402  665837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:36:02.892516  665837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:36:02.892646  665837 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:36:02.892664  665837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:36:02.892706  665837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:36:02.892798  665837 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:36:02.892810  665837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:36:02.892845  665837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:36:02.892914  665837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.newest-cni-858719 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-858719]
	I1207 23:36:03.050587  665837 provision.go:177] copyRemoteCerts
	I1207 23:36:03.050669  665837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:36:03.050721  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:03.070035  665837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:03.165012  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:36:03.185451  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1207 23:36:03.203690  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 23:36:03.222074  665837 provision.go:87] duration metric: took 349.709006ms to configureAuth
	I1207 23:36:03.222103  665837 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:36:03.222391  665837 config.go:182] Loaded profile config "newest-cni-858719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:03.222524  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:03.241380  665837 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:03.241606  665837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1207 23:36:03.241621  665837 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:36:03.515191  665837 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:36:03.515218  665837 machine.go:97] duration metric: took 1.151891491s to provisionDockerMachine
	I1207 23:36:03.515227  665837 client.go:176] duration metric: took 5.677472406s to LocalClient.Create
	I1207 23:36:03.515243  665837 start.go:167] duration metric: took 5.677548642s to libmachine.API.Create "newest-cni-858719"
	I1207 23:36:03.515251  665837 start.go:293] postStartSetup for "newest-cni-858719" (driver="docker")
	I1207 23:36:03.515267  665837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:36:03.515351  665837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:36:03.515402  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:03.533709  665837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:03.630870  665837 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:36:03.635066  665837 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:36:03.635097  665837 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:36:03.635112  665837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:36:03.635169  665837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:36:03.635266  665837 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:36:03.635427  665837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:36:03.643491  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:03.665613  665837 start.go:296] duration metric: took 150.343557ms for postStartSetup
	I1207 23:36:03.665988  665837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:03.685535  665837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json ...
	I1207 23:36:03.685798  665837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:36:03.685840  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:03.704761  665837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:03.800788  665837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:36:03.806954  665837 start.go:128] duration metric: took 5.972090169s to createHost
	I1207 23:36:03.806991  665837 start.go:83] releasing machines lock for "newest-cni-858719", held for 5.972264495s
	I1207 23:36:03.807085  665837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:03.828719  665837 ssh_runner.go:195] Run: cat /version.json
	I1207 23:36:03.828755  665837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:36:03.828780  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:03.828863  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:03.851741  665837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:03.853088  665837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:04.013379  665837 ssh_runner.go:195] Run: systemctl --version
	I1207 23:36:04.020512  665837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:36:04.062124  665837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:36:04.068394  665837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:36:04.068478  665837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:36:04.098889  665837 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 23:36:04.098918  665837 start.go:496] detecting cgroup driver to use...
	I1207 23:36:04.098952  665837 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:36:04.099002  665837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:36:04.120631  665837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:36:04.134242  665837 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:36:04.134311  665837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:36:04.151593  665837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:36:04.173137  665837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:36:04.268487  665837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:36:04.357170  665837 docker.go:234] disabling docker service ...
	I1207 23:36:04.357234  665837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:36:04.376914  665837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:36:04.390376  665837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:36:04.477949  665837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:36:04.572394  665837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:36:04.585670  665837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:36:04.599761  665837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:36:04.599839  665837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.610153  665837 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:36:04.610221  665837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.619204  665837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.628444  665837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.637136  665837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:36:04.646036  665837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.655665  665837 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.670198  665837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.679196  665837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:36:04.686802  665837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:36:04.694408  665837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:04.776822  665837 ssh_runner.go:195] Run: sudo systemctl restart crio
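	For reference, the sed edits above amount to four settings in the CRI-O drop-in. A minimal spot-check, with the container name taken from this run and the expected values copied from the commands in the log (a sketch, not output captured from the node):
	# Spot-check the drop-in edited by the sed commands above (hypothetical check).
	docker exec newest-cni-858719 grep -E \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",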
	I1207 23:36:04.915221  665837 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:36:04.915298  665837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:36:04.919552  665837 start.go:564] Will wait 60s for crictl version
	I1207 23:36:04.919612  665837 ssh_runner.go:195] Run: which crictl
	I1207 23:36:04.923744  665837 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:36:04.949075  665837 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:36:04.949172  665837 ssh_runner.go:195] Run: crio --version
	I1207 23:36:04.978314  665837 ssh_runner.go:195] Run: crio --version
	I1207 23:36:05.012522  665837 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1207 23:36:05.013994  665837 cli_runner.go:164] Run: docker network inspect newest-cni-858719 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:36:05.032995  665837 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1207 23:36:05.037347  665837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
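	The one-liner above pins host.minikube.internal to the gateway address inside the node. A quick way to confirm the entry took effect (a sketch; getent reads /etc/hosts via NSS):
	# Confirm the /etc/hosts entry written above (hypothetical check).
	docker exec newest-cni-858719 getent hosts host.minikube.internal
	# 192.168.76.1    host.minikube.internal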
	I1207 23:36:05.049520  665837 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1207 23:36:05.050792  665837 kubeadm.go:884] updating cluster {Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:36:05.050937  665837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:36:05.051041  665837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:05.083782  665837 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:05.083802  665837 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:36:05.083847  665837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:05.110679  665837 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:05.110702  665837 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:36:05.110710  665837 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1207 23:36:05.110797  665837 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-858719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:36:05.110871  665837 ssh_runner.go:195] Run: crio config
	I1207 23:36:05.156683  665837 cni.go:84] Creating CNI manager for ""
	I1207 23:36:05.156713  665837 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:05.156737  665837 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1207 23:36:05.156771  665837 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-858719 NodeName:newest-cni-858719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:36:05.156939  665837 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-858719"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
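	The kubeadm configuration printed above can be sanity-checked offline before the init step later in this log. A minimal sketch, assuming the documents are saved locally as kubeadm.yaml (kubeadm config validate is a standard subcommand in recent releases):
	# Validate the generated config without touching the cluster (hypothetical local copy).
	kubeadm config validate --config kubeadm.yaml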
	
	I1207 23:36:05.157019  665837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1207 23:36:05.165120  665837 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:36:05.165195  665837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:36:05.172977  665837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1207 23:36:05.185886  665837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1207 23:36:05.201903  665837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1207 23:36:05.215261  665837 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:36:05.219158  665837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:05.229705  665837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:05.334863  665837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:36:05.359142  665837 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719 for IP: 192.168.76.2
	I1207 23:36:05.359170  665837 certs.go:195] generating shared ca certs ...
	I1207 23:36:05.359194  665837 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.359394  665837 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:36:05.359470  665837 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:36:05.359492  665837 certs.go:257] generating profile certs ...
	I1207 23:36:05.359577  665837 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.key
	I1207 23:36:05.359596  665837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.crt with IP's: []
	I1207 23:36:05.458374  665837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.crt ...
	I1207 23:36:05.458420  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.crt: {Name:mk6d221c986dcccee24e29be113ea69c348eb796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.458674  665837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.key ...
	I1207 23:36:05.458700  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.key: {Name:mk707f84edcddb5e4839ac043dccc843e61f8210 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.458880  665837 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key.81fe4363
	I1207 23:36:05.458905  665837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt.81fe4363 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1207 23:36:05.509982  665837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt.81fe4363 ...
	I1207 23:36:05.510016  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt.81fe4363: {Name:mk0054ada683cb260273268c7ce81d6ead662c0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.510198  665837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key.81fe4363 ...
	I1207 23:36:05.510217  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key.81fe4363: {Name:mk31ff682dcae9ca96dc237204769da60d62ee7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.510321  665837 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt.81fe4363 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt
	I1207 23:36:05.510458  665837 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key.81fe4363 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key
	I1207 23:36:05.510543  665837 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key
	I1207 23:36:05.510566  665837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.crt with IP's: []
	I1207 23:36:05.640941  665837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.crt ...
	I1207 23:36:05.640971  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.crt: {Name:mkd739335a5874b7c0a770d58de470172e2dbedd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.641129  665837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key ...
	I1207 23:36:05.641142  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key: {Name:mke6ddf36874683d7952a65cdc8000dccae22770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.641319  665837 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:36:05.641387  665837 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:36:05.641396  665837 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:36:05.641421  665837 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:36:05.641447  665837 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:36:05.641470  665837 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:36:05.641517  665837 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:05.642150  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:36:05.661589  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:36:05.679772  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:36:05.698458  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:36:05.716258  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1207 23:36:05.733934  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:36:05.751905  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:36:05.770267  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:36:05.787797  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:36:05.807066  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:36:05.824497  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:36:05.842195  665837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:36:05.856459  665837 ssh_runner.go:195] Run: openssl version
	I1207 23:36:05.862789  665837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:36:05.870598  665837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:36:05.878422  665837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:36:05.882313  665837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:36:05.882379  665837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:36:05.917466  665837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:05.925258  665837 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:05.933502  665837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:05.941365  665837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:36:05.949194  665837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:05.952999  665837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:05.953069  665837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:05.988689  665837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:36:05.996881  665837 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:36:06.005175  665837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:36:06.013509  665837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:36:06.021477  665837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:36:06.025846  665837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:36:06.025909  665837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:36:06.065650  665837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:36:06.073822  665837 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
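	The openssl/ln pairs above all follow one pattern: compute the certificate's OpenSSL subject hash, then symlink the PEM into /etc/ssl/certs under that hash so TLS clients can find it. A condensed sketch of that pattern (the hashes produced in this run were 3ec20f2e, b5213941 and 51391683):
	# Link a CA certificate under its subject-hash name, as the steps above do.
	cert=/usr/share/ca-certificates/minikubeCA.pem      # any of the PEMs above
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"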
	I1207 23:36:06.081580  665837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:36:06.085187  665837 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:36:06.085252  665837 kubeadm.go:401] StartCluster: {Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:06.085347  665837 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:36:06.085403  665837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:36:06.116569  665837 cri.go:89] found id: ""
	I1207 23:36:06.116652  665837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:36:06.125546  665837 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:36:06.133602  665837 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:36:06.133654  665837 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:36:06.141466  665837 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:36:06.141488  665837 kubeadm.go:158] found existing configuration files:
	
	I1207 23:36:06.141546  665837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:36:06.149511  665837 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:36:06.149584  665837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:36:06.157230  665837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:36:06.165147  665837 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:36:06.165213  665837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:36:06.173153  665837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:36:06.181615  665837 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:36:06.181677  665837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:36:06.189582  665837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:36:06.197529  665837 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:36:06.197592  665837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
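	The four grep/rm pairs above implement one rule: keep an existing kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm regenerates it. A compact sketch of that rule (not minikube's actual code):
	# Stale kubeconfig cleanup, condensed from the commands above.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done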
	I1207 23:36:06.205204  665837 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:36:06.246058  665837 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1207 23:36:06.246147  665837 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:36:06.331379  665837 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:36:06.331493  665837 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:36:06.331530  665837 kubeadm.go:319] OS: Linux
	I1207 23:36:06.331600  665837 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:36:06.331651  665837 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:36:06.331710  665837 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:36:06.331818  665837 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:36:06.331916  665837 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:36:06.331981  665837 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:36:06.332048  665837 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:36:06.332128  665837 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:36:06.391726  665837 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:36:06.391894  665837 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:36:06.392060  665837 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:36:06.400113  665837 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 23:36:06.402392  665837 out.go:252]   - Generating certificates and keys ...
	I1207 23:36:06.402495  665837 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:36:06.402610  665837 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:36:06.483657  665837 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:36:06.573106  665837 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 23:36:06.701431  665837 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 23:36:06.736106  665837 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 23:36:07.035667  665837 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 23:36:07.035880  665837 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-858719] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1207 23:36:07.083552  665837 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 23:36:07.083740  665837 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-858719] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1207 23:36:07.112003  665837 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 23:36:07.149906  665837 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 23:36:07.264022  665837 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 23:36:07.264234  665837 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:36:07.309844  665837 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:36:07.342914  665837 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:36:07.358133  665837 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:36:07.458609  665837 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:36:07.517290  665837 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:36:07.517849  665837 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:36:07.522893  665837 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 23:36:07.525987  665837 out.go:252]   - Booting up control plane ...
	I1207 23:36:07.526156  665837 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:36:07.526263  665837 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:36:07.526372  665837 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:36:07.540600  665837 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:36:07.540760  665837 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:36:07.549156  665837 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:36:07.549503  665837 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:36:07.549598  665837 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:36:03.875543  663227 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:36:04.187689  663227 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 23:36:04.382114  663227 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 23:36:04.713204  663227 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 23:36:05.332551  663227 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 23:36:05.332771  663227 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-312944 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1207 23:36:05.534039  663227 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 23:36:05.534236  663227 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-312944 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1207 23:36:06.312022  663227 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 23:36:06.545164  663227 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 23:36:06.798735  663227 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 23:36:06.799013  663227 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:36:07.135049  663227 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:36:07.298200  663227 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:36:07.371156  663227 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:36:07.646146  663227 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:36:08.354276  663227 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:36:08.354863  663227 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:36:08.361382  663227 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 23:36:08.363200  663227 out.go:252]   - Booting up control plane ...
	I1207 23:36:08.363380  663227 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:36:08.363524  663227 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:36:08.364215  663227 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:36:08.381319  663227 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:36:08.381493  663227 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:36:08.389287  663227 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:36:08.389544  663227 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:36:08.389619  663227 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:36:07.667091  665837 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 23:36:07.667277  665837 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 23:36:08.168140  665837 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.174754ms
	I1207 23:36:08.171235  665837 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 23:36:08.171381  665837 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1207 23:36:08.171543  665837 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 23:36:08.171624  665837 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 23:36:09.176277  665837 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004990684s
	I1207 23:36:10.185530  665837 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.014192221s
	I1207 23:36:12.173071  665837 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001762312s
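	The control-plane-check lines above poll three fixed health endpoints (plus the kubelet healthz noted earlier). They can be probed by hand from inside the node; a sketch with the URLs copied from the log (-k skips verification of the self-signed serving certs):
	# Reproduce kubeadm's health probes manually (hypothetical manual check).
	curl -k https://192.168.76.2:8443/livez      # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez        # kube-scheduler
	curl     http://127.0.0.1:10248/healthz      # kubelet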
	I1207 23:36:12.194610  665837 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 23:36:12.209368  665837 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 23:36:12.226741  665837 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 23:36:12.226977  665837 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-858719 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 23:36:12.239792  665837 kubeadm.go:319] [bootstrap-token] Using token: mq6mhg.hwg0yzc47jfu4zht
	I1207 23:36:12.241395  665837 out.go:252]   - Configuring RBAC rules ...
	I1207 23:36:12.241570  665837 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 23:36:12.247069  665837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 23:36:12.258568  665837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 23:36:12.261693  665837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 23:36:12.264880  665837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 23:36:12.268027  665837 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 23:36:12.580634  665837 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	
	
	==> CRI-O <==
	Dec 07 23:35:40 no-preload-313006 crio[572]: time="2025-12-07T23:35:40.514222251Z" level=info msg="Started container" PID=1768 containerID=d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h/dashboard-metrics-scraper id=0ae1d10b-7409-4eaa-9113-849e477d8893 name=/runtime.v1.RuntimeService/StartContainer sandboxID=311dc02799023ad26957723ef5e0353336394c2181a73c8a13fd1a721603fc89
	Dec 07 23:35:41 no-preload-313006 crio[572]: time="2025-12-07T23:35:41.55057371Z" level=info msg="Removing container: 47abf464763e71165bcdab4db1ebf65eb73a9e00bb6a4db90fb3163f12f3d1d5" id=9cc4e628-ffd8-405a-8c03-d4d0a4b02b38 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:35:41 no-preload-313006 crio[572]: time="2025-12-07T23:35:41.561829981Z" level=info msg="Removed container 47abf464763e71165bcdab4db1ebf65eb73a9e00bb6a4db90fb3163f12f3d1d5: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h/dashboard-metrics-scraper" id=9cc4e628-ffd8-405a-8c03-d4d0a4b02b38 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.583579536Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=24e2c512-aee2-4b74-9f64-d1409730e3cc name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.594959133Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=753dbf2c-09e2-4de1-9b38-90615e3173d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.615897917Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=28347122-adc4-4fa9-87ec-59815e90b4b5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.616064446Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.651050159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.651258454Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/190c9a5093d18173d8622fb8634d4e121b451dd4336fd251e3a35f62a7599088/merged/etc/passwd: no such file or directory"
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.651292455Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/190c9a5093d18173d8622fb8634d4e121b451dd4336fd251e3a35f62a7599088/merged/etc/group: no such file or directory"
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.651625946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.887827824Z" level=info msg="Created container 9d70771c342e0e6a8b340491d36ea107bf8abe93159eff71b6b33c5a89df58be: kube-system/storage-provisioner/storage-provisioner" id=28347122-adc4-4fa9-87ec-59815e90b4b5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.889504117Z" level=info msg="Starting container: 9d70771c342e0e6a8b340491d36ea107bf8abe93159eff71b6b33c5a89df58be" id=9dfd54a3-a986-46d4-aca6-b8a774964676 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.891895021Z" level=info msg="Started container" PID=1782 containerID=9d70771c342e0e6a8b340491d36ea107bf8abe93159eff71b6b33c5a89df58be description=kube-system/storage-provisioner/storage-provisioner id=9dfd54a3-a986-46d4-aca6-b8a774964676 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b5531e6a3e1a7af31b709666ec1989cdc6b00c6e736f884036cb80df0a77319
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.458246943Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ffb81999-ca9a-4501-a776-6edd1612a6e1 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.459379346Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=02e66dba-019c-4d33-8cfe-c4c0cc5c484c name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.460419568Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h/dashboard-metrics-scraper" id=0049aa38-ab80-477d-a83d-ee6497c098ab name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.460577119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.467131248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.467770551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.492942649Z" level=info msg="Created container 956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h/dashboard-metrics-scraper" id=0049aa38-ab80-477d-a83d-ee6497c098ab name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.493647342Z" level=info msg="Starting container: 956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0" id=619a6d42-33f8-4903-9c2c-2aeebaa7829b name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.495553707Z" level=info msg="Started container" PID=1818 containerID=956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h/dashboard-metrics-scraper id=619a6d42-33f8-4903-9c2c-2aeebaa7829b name=/runtime.v1.RuntimeService/StartContainer sandboxID=311dc02799023ad26957723ef5e0353336394c2181a73c8a13fd1a721603fc89
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.626197068Z" level=info msg="Removing container: d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485" id=05472ab6-4248-4715-87ca-e4bf9660b2ed name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.636917966Z" level=info msg="Removed container d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h/dashboard-metrics-scraper" id=05472ab6-4248-4715-87ca-e4bf9660b2ed name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	956668bdbf8d2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   311dc02799023       dashboard-metrics-scraper-867fb5f87b-7w27h   kubernetes-dashboard
	9d70771c342e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   3b5531e6a3e1a       storage-provisioner                          kube-system
	8a4e2c23a171e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   5ad6776b7ef68       kubernetes-dashboard-b84665fb8-zvhhr         kubernetes-dashboard
	915a05bbae2c6       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   dfaacba9243d4       busybox                                      default
	63e35ea9afaae       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           52 seconds ago      Running             coredns                     0                   f228db3be2520       coredns-7d764666f9-btjrp                     kube-system
	393f33ab322db       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           52 seconds ago      Running             kube-proxy                  0                   08915dbfad33d       kube-proxy-xw4pf                             kube-system
	2c733f7f60399       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   3b5531e6a3e1a       storage-provisioner                          kube-system
	875984b763206       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   4bfdfdc332385       kindnet-nzf5r                                kube-system
	7a318b0832368       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   17da0d6c592de       etcd-no-preload-313006                       kube-system
	404e1d5beb2da       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           54 seconds ago      Running             kube-controller-manager     0                   5199d5b5b27ac       kube-controller-manager-no-preload-313006    kube-system
	087d0f5345ac8       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           54 seconds ago      Running             kube-apiserver              0                   958dccc6a52f9       kube-apiserver-no-preload-313006             kube-system
	1902052b7fa9a       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           54 seconds ago      Running             kube-scheduler              0                   90bbf1eef33f8       kube-scheduler-no-preload-313006             kube-system
	
	
	==> coredns [63e35ea9afaaed7ad438f881cbcaf3b5813164e93a7f04bed7176c35907cb4c0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:56789 - 3926 "HINFO IN 1702562694029715222.3097478757243340104. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030089708s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-313006
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-313006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=no-preload-313006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_34_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:34:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-313006
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:36:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:35:51 +0000   Sun, 07 Dec 2025 23:34:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:35:51 +0000   Sun, 07 Dec 2025 23:34:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:35:51 +0000   Sun, 07 Dec 2025 23:34:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:35:51 +0000   Sun, 07 Dec 2025 23:34:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-313006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                1b1493a2-5c01-4861-a1e5-15f85715a778
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-7d764666f9-btjrp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-no-preload-313006                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-nzf5r                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-313006              250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-313006     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-xw4pf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-313006              100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7w27h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zvhhr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node no-preload-313006 event: Registered Node no-preload-313006 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node no-preload-313006 event: Registered Node no-preload-313006 in Controller
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [7a318b0832368150c50b8e6bcc0b249c6c0f5e0835f526a9036a3f9d6818cc85] <==
	{"level":"warn","ts":"2025-12-07T23:35:19.685960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.692548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.698592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.705340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.712615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.719863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.726201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.732512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.738758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.746865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.755908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.763693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.770914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.777400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.783958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.803895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.810115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.817149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.823405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.867172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50816","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T23:35:51.321892Z","caller":"traceutil/trace.go:172","msg":"trace[2117102789] transaction","detail":"{read_only:false; response_revision:663; number_of_response:1; }","duration":"162.342371ms","start":"2025-12-07T23:35:51.159533Z","end":"2025-12-07T23:35:51.321875Z","steps":["trace[2117102789] 'process raft request'  (duration: 162.22862ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-07T23:35:51.718124Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.511883ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2025-12-07T23:35:51.718372Z","caller":"traceutil/trace.go:172","msg":"trace[1813324635] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:665; }","duration":"100.774283ms","start":"2025-12-07T23:35:51.617570Z","end":"2025-12-07T23:35:51.718345Z","steps":["trace[1813324635] 'agreement among raft nodes before linearized reading'  (duration: 81.676832ms)","trace[1813324635] 'range keys from in-memory index tree'  (duration: 18.724524ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-07T23:35:51.718386Z","caller":"traceutil/trace.go:172","msg":"trace[230611098] transaction","detail":"{read_only:false; response_revision:666; number_of_response:1; }","duration":"118.138837ms","start":"2025-12-07T23:35:51.600232Z","end":"2025-12-07T23:35:51.718371Z","steps":["trace[230611098] 'process raft request'  (duration: 99.064608ms)","trace[230611098] 'compare'  (duration: 18.856204ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-07T23:35:51.718369Z","caller":"traceutil/trace.go:172","msg":"trace[1879282581] transaction","detail":"{read_only:false; response_revision:667; number_of_response:1; }","duration":"114.490433ms","start":"2025-12-07T23:35:51.603862Z","end":"2025-12-07T23:35:51.718353Z","steps":["trace[1879282581] 'process raft request'  (duration: 114.413442ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:36:13 up  2:18,  0 user,  load average: 3.23, 2.37, 1.88
	Linux no-preload-313006 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [875984b7632065686e5488eaa175d1e9bc6f11d4ab18328ac4d3c2df479df442] <==
	I1207 23:35:21.038617       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:35:21.038870       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1207 23:35:21.039059       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:35:21.039079       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:35:21.039102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:35:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:35:21.237110       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:35:21.336477       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:35:21.336542       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:35:21.336934       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:35:21.736685       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:35:21.736713       1 metrics.go:72] Registering metrics
	I1207 23:35:21.736807       1 controller.go:711] "Syncing nftables rules"
	I1207 23:35:31.237644       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:35:31.237722       1 main.go:301] handling current node
	I1207 23:35:41.237531       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:35:41.237578       1 main.go:301] handling current node
	I1207 23:35:51.245490       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:35:51.245524       1 main.go:301] handling current node
	I1207 23:36:01.240462       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:36:01.240493       1 main.go:301] handling current node
	I1207 23:36:11.238403       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:36:11.238450       1 main.go:301] handling current node
	
	
	==> kube-apiserver [087d0f5345ac825bcf193ab138e126157b165b5aa86f1b652afd90640d7fda6e] <==
	I1207 23:35:20.341825       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:35:20.341934       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 23:35:20.342020       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:20.342287       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 23:35:20.342975       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1207 23:35:20.343033       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1207 23:35:20.343350       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1207 23:35:20.343360       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 23:35:20.343531       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:20.349905       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1207 23:35:20.354010       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:35:20.365383       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:35:20.373773       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:35:20.577097       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:35:20.627185       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:35:20.652761       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:35:20.670951       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:35:20.677621       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:35:20.714447       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.119.12"}
	I1207 23:35:20.723874       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.43.92"}
	I1207 23:35:21.245030       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1207 23:35:23.938570       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:35:23.938617       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:35:23.988529       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:35:24.040416       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [404e1d5beb2da9d3cc45722c51fc2e1c7b0c587a72d76030ae16a0117eb8350a] <==
	I1207 23:35:23.492221       1 range_allocator.go:177] "Sending events to api server"
	I1207 23:35:23.492159       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492267       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492167       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492293       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492176       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492358       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492363       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492138       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492381       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492267       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1207 23:35:23.492400       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:35:23.492406       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492115       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492169       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-313006"
	I1207 23:35:23.492671       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492736       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492705       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1207 23:35:23.493041       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.500675       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.503076       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:35:23.592742       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.592763       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:35:23.592768       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:35:23.604023       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [393f33ab322dbe6524e1390a9b4b3524caaee37f8fd3322f5fa42afcba2d88c8] <==
	I1207 23:35:20.852603       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:35:20.926984       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:35:21.027150       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:21.027187       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1207 23:35:21.027265       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:35:21.047531       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:35:21.047599       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:35:21.053166       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:35:21.053618       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:35:21.053641       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:35:21.054871       1 config.go:200] "Starting service config controller"
	I1207 23:35:21.055266       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:35:21.054967       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:35:21.055300       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:35:21.055096       1 config.go:309] "Starting node config controller"
	I1207 23:35:21.055313       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:35:21.055319       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:35:21.054919       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:35:21.055343       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:35:21.156073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:35:21.156094       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:35:21.156107       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1902052b7fa9a51b713591332e8f8f19d13383667710cc98390abfe859d91e2c] <==
	I1207 23:35:19.288474       1 serving.go:386] Generated self-signed cert in-memory
	W1207 23:35:20.272895       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:35:20.272951       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:35:20.272963       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:35:20.272972       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:35:20.297811       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 23:35:20.297841       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:35:20.300652       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:35:20.300730       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:35:20.300810       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:35:20.300949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:35:20.401579       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 07 23:35:40 no-preload-313006 kubelet[724]: E1207 23:35:40.544648     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" containerName="dashboard-metrics-scraper"
	Dec 07 23:35:40 no-preload-313006 kubelet[724]: I1207 23:35:40.544745     724 scope.go:122] "RemoveContainer" containerID="d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485"
	Dec 07 23:35:40 no-preload-313006 kubelet[724]: E1207 23:35:40.544958     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7w27h_kubernetes-dashboard(d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" podUID="d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc"
	Dec 07 23:35:41 no-preload-313006 kubelet[724]: I1207 23:35:41.548860     724 scope.go:122] "RemoveContainer" containerID="47abf464763e71165bcdab4db1ebf65eb73a9e00bb6a4db90fb3163f12f3d1d5"
	Dec 07 23:35:41 no-preload-313006 kubelet[724]: E1207 23:35:41.549194     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" containerName="dashboard-metrics-scraper"
	Dec 07 23:35:41 no-preload-313006 kubelet[724]: I1207 23:35:41.549216     724 scope.go:122] "RemoveContainer" containerID="d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485"
	Dec 07 23:35:41 no-preload-313006 kubelet[724]: E1207 23:35:41.549425     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7w27h_kubernetes-dashboard(d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" podUID="d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc"
	Dec 07 23:35:48 no-preload-313006 kubelet[724]: E1207 23:35:48.682946     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" containerName="dashboard-metrics-scraper"
	Dec 07 23:35:48 no-preload-313006 kubelet[724]: I1207 23:35:48.682991     724 scope.go:122] "RemoveContainer" containerID="d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485"
	Dec 07 23:35:48 no-preload-313006 kubelet[724]: E1207 23:35:48.683181     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7w27h_kubernetes-dashboard(d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" podUID="d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc"
	Dec 07 23:35:51 no-preload-313006 kubelet[724]: I1207 23:35:51.582866     724 scope.go:122] "RemoveContainer" containerID="2c733f7f60399147a390c6e21cbb293e3dd549fd6dc613363b85209ca503d959"
	Dec 07 23:35:56 no-preload-313006 kubelet[724]: E1207 23:35:56.716661     724 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-btjrp" containerName="coredns"
	Dec 07 23:36:05 no-preload-313006 kubelet[724]: E1207 23:36:05.457541     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" containerName="dashboard-metrics-scraper"
	Dec 07 23:36:05 no-preload-313006 kubelet[724]: I1207 23:36:05.457596     724 scope.go:122] "RemoveContainer" containerID="d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485"
	Dec 07 23:36:05 no-preload-313006 kubelet[724]: I1207 23:36:05.624862     724 scope.go:122] "RemoveContainer" containerID="d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485"
	Dec 07 23:36:05 no-preload-313006 kubelet[724]: E1207 23:36:05.625105     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" containerName="dashboard-metrics-scraper"
	Dec 07 23:36:05 no-preload-313006 kubelet[724]: I1207 23:36:05.625140     724 scope.go:122] "RemoveContainer" containerID="956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0"
	Dec 07 23:36:05 no-preload-313006 kubelet[724]: E1207 23:36:05.625360     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7w27h_kubernetes-dashboard(d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" podUID="d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc"
	Dec 07 23:36:08 no-preload-313006 kubelet[724]: E1207 23:36:08.683284     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" containerName="dashboard-metrics-scraper"
	Dec 07 23:36:08 no-preload-313006 kubelet[724]: I1207 23:36:08.683353     724 scope.go:122] "RemoveContainer" containerID="956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0"
	Dec 07 23:36:08 no-preload-313006 kubelet[724]: E1207 23:36:08.683570     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7w27h_kubernetes-dashboard(d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" podUID="d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc"
	Dec 07 23:36:10 no-preload-313006 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 07 23:36:10 no-preload-313006 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 07 23:36:10 no-preload-313006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 07 23:36:10 no-preload-313006 systemd[1]: kubelet.service: Consumed 1.771s CPU time.
	
	
	==> kubernetes-dashboard [8a4e2c23a171e4e01d7e5be0846972a8e83d5db6e5feebf9d7658400cf5cf62e] <==
	2025/12/07 23:35:30 Using namespace: kubernetes-dashboard
	2025/12/07 23:35:30 Using in-cluster config to connect to apiserver
	2025/12/07 23:35:30 Using secret token for csrf signing
	2025/12/07 23:35:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/07 23:35:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/07 23:35:30 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/07 23:35:30 Generating JWE encryption key
	2025/12/07 23:35:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/07 23:35:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/07 23:35:30 Initializing JWE encryption key from synchronized object
	2025/12/07 23:35:30 Creating in-cluster Sidecar client
	2025/12/07 23:35:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/07 23:35:30 Serving insecurely on HTTP port: 9090
	2025/12/07 23:36:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/07 23:35:30 Starting overwatch
	
	
	==> storage-provisioner [2c733f7f60399147a390c6e21cbb293e3dd549fd6dc613363b85209ca503d959] <==
	I1207 23:35:20.815603       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1207 23:35:50.819165       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9d70771c342e0e6a8b340491d36ea107bf8abe93159eff71b6b33c5a89df58be] <==
	I1207 23:35:52.945515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 23:35:52.953253       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 23:35:52.953306       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 23:35:52.955567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:56.411199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:00.671544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:04.269937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:07.323685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:10.347360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:10.353238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:36:10.353441       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 23:36:10.353652       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-313006_3393b33c-f65f-4aee-ba6a-fdc018c105b9!
	I1207 23:36:10.354846       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27117f0f-4148-42d8-a5da-bf1f690374b0", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-313006_3393b33c-f65f-4aee-ba6a-fdc018c105b9 became leader
	W1207 23:36:10.360948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:10.370389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:36:10.454267       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-313006_3393b33c-f65f-4aee-ba6a-fdc018c105b9!
	W1207 23:36:12.374169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:12.378775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-313006 -n no-preload-313006
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-313006 -n no-preload-313006: exit status 2 (388.957756ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-313006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-313006
helpers_test.go:243: (dbg) docker inspect no-preload-313006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28",
	        "Created": "2025-12-07T23:33:56.743918699Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 656576,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:35:12.209081803Z",
	            "FinishedAt": "2025-12-07T23:35:10.472530731Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28/hosts",
	        "LogPath": "/var/lib/docker/containers/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28/f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28-json.log",
	        "Name": "/no-preload-313006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-313006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-313006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2f71b478561f7677a512d83b239743d3a12195edf06004fa5e71d67fe6faa28",
	                "LowerDir": "/var/lib/docker/overlay2/3127bde15e4dc2f4657d8e4018b5da1f90b377ad2f68b2bb2e943541b2587371-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3127bde15e4dc2f4657d8e4018b5da1f90b377ad2f68b2bb2e943541b2587371/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3127bde15e4dc2f4657d8e4018b5da1f90b377ad2f68b2bb2e943541b2587371/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3127bde15e4dc2f4657d8e4018b5da1f90b377ad2f68b2bb2e943541b2587371/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-313006",
	                "Source": "/var/lib/docker/volumes/no-preload-313006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-313006",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-313006",
	                "name.minikube.sigs.k8s.io": "no-preload-313006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e03649c3ddb92a2a229325c642c3325d1bb9416a5abb1aad0119efbdce0c62e5",
	            "SandboxKey": "/var/run/docker/netns/e03649c3ddb9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-313006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "357321d5a31d4d37dba08f8b7360dac5f2baa6c86fc4940023c2b5c75f1a37a8",
	                    "EndpointID": "c8946e407556a5aef14e5f12a07b118cd3df0fa82f16dd9cd55bdb622caa6205",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "ba:13:94:ff:bc:79",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-313006",
	                        "f2f71b478561"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-313006 -n no-preload-313006
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-313006 -n no-preload-313006: exit status 2 (349.254312ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-313006 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-313006 logs -n 25: (1.231848314s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:33 UTC │ 07 Dec 25 23:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-320477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ stop    │ -p old-k8s-version-320477 --alsologtostderr -v=3                                                                                                                                                                                                     │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-320477 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p stopped-upgrade-604160                                                                                                                                                                                                                            │ stopped-upgrade-604160       │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:34 UTC │
	│ start   │ -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-313006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ stop    │ -p no-preload-313006 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ addons  │ enable dashboard -p no-preload-313006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ image   │ old-k8s-version-320477 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ pause   │ -p old-k8s-version-320477 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p disable-driver-mounts-837628                                                                                                                                                                                                                      │ disable-driver-mounts-837628 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p default-k8s-diff-port-312944 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-703538                                                                                                                                                                                                                         │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-654118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ stop    │ -p embed-certs-654118 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ image   │ no-preload-313006 image list --format=json                                                                                                                                                                                                           │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ pause   │ -p no-preload-313006 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:35:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:35:57.632024  665837 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:35:57.632148  665837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:35:57.632156  665837 out.go:374] Setting ErrFile to fd 2...
	I1207 23:35:57.632161  665837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:35:57.632377  665837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:35:57.632861  665837 out.go:368] Setting JSON to false
	I1207 23:35:57.634012  665837 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8302,"bootTime":1765142256,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:35:57.634076  665837 start.go:143] virtualization: kvm guest
	I1207 23:35:57.636222  665837 out.go:179] * [newest-cni-858719] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:35:57.637537  665837 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:35:57.637601  665837 notify.go:221] Checking for updates...
	I1207 23:35:57.639940  665837 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:35:57.641367  665837 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:35:57.642948  665837 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:35:57.644532  665837 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:35:57.646052  665837 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:35:57.648187  665837 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:35:57.648313  665837 config.go:182] Loaded profile config "embed-certs-654118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:35:57.648457  665837 config.go:182] Loaded profile config "no-preload-313006": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:35:57.648599  665837 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:35:57.673911  665837 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:35:57.674024  665837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:35:57.734477  665837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-07 23:35:57.723366575 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:35:57.734675  665837 docker.go:319] overlay module found
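
	[editor's note] The two `docker system info --format "{{json .}}"` runs above feed the driver checks ("overlay module found", cgroup driver detection). A minimal sketch of that probe, assuming only the standard library and a local docker CLI; the struct below captures just a few fields and the JSON key names are assumptions based on the dump above, not minikube's own types.

	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    	"os/exec"
	    )

	    // dockerInfo holds only the fields this sketch inspects from
	    // `docker system info --format "{{json .}}"`.
	    type dockerInfo struct {
	    	Driver        string `json:"Driver"`
	    	ServerVersion string `json:"ServerVersion"`
	    	CgroupDriver  string `json:"CgroupDriver"`
	    }

	    func main() {
	    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	    	if err != nil {
	    		fmt.Println("docker not available:", err)
	    		return
	    	}
	    	var info dockerInfo
	    	if err := json.Unmarshal(out, &info); err != nil {
	    		fmt.Println("unexpected output:", err)
	    		return
	    	}
	    	// "overlay module found" in the log corresponds to an overlay2 storage driver.
	    	fmt.Printf("server %s, storage driver %s, cgroup driver %s\n",
	    		info.ServerVersion, info.Driver, info.CgroupDriver)
	    }
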
	I1207 23:35:57.736420  665837 out.go:179] * Using the docker driver based on user configuration
	I1207 23:35:57.737776  665837 start.go:309] selected driver: docker
	I1207 23:35:57.737790  665837 start.go:927] validating driver "docker" against <nil>
	I1207 23:35:57.737804  665837 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:35:57.738404  665837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:35:57.800588  665837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-07 23:35:57.790071121 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:35:57.800741  665837 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1207 23:35:57.800777  665837 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1207 23:35:57.801009  665837 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1207 23:35:57.803264  665837 out.go:179] * Using Docker driver with root privileges
	I1207 23:35:57.804574  665837 cni.go:84] Creating CNI manager for ""
	I1207 23:35:57.804645  665837 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:35:57.804658  665837 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 23:35:57.804753  665837 start.go:353] cluster config:
	{Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:35:57.806078  665837 out.go:179] * Starting "newest-cni-858719" primary control-plane node in "newest-cni-858719" cluster
	I1207 23:35:57.807183  665837 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:35:57.808220  665837 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:35:57.809361  665837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:35:57.809405  665837 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1207 23:35:57.809414  665837 cache.go:65] Caching tarball of preloaded images
	I1207 23:35:57.809487  665837 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:35:57.809505  665837 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:35:57.809517  665837 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1207 23:35:57.809622  665837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json ...
	I1207 23:35:57.809642  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json: {Name:mk58abd3aba696b237e078949efd134e91598be6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:35:57.833660  665837 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:35:57.833685  665837 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:35:57.833709  665837 cache.go:243] Successfully downloaded all kic artifacts
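
	[editor's note] The preload lines above show the pattern: if the per-version tarball already sits under .minikube/cache, the download is skipped. A minimal sketch of that existence check, assuming the cache path layout and "v18" schema string visible in the log; not minikube's preload.go, just the shape of the check.

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"path/filepath"
	    )

	    // preloadPath builds the cache location seen in the log for a given
	    // Kubernetes version and container runtime (names are illustrative).
	    func preloadPath(cacheDir, k8sVersion, runtime string) string {
	    	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	    	return filepath.Join(cacheDir, "preloaded-tarball", name)
	    }

	    func main() {
	    	p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"), "v1.35.0-beta.0", "cri-o")
	    	if _, err := os.Stat(p); err == nil {
	    		fmt.Println("found local preload, skipping download:", p)
	    	} else {
	    		fmt.Println("no cached preload, would download:", p)
	    	}
	    }
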
	I1207 23:35:57.833753  665837 start.go:360] acquireMachinesLock for newest-cni-858719: {Name:mk3f9783a06cd72eff911e9615fc59e854b06695 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:35:57.834702  665837 start.go:364] duration metric: took 917.515µs to acquireMachinesLock for "newest-cni-858719"
	I1207 23:35:57.834748  665837 start.go:93] Provisioning new machine with config: &{Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:35:57.834842  665837 start.go:125] createHost starting for "" (driver="docker")
	I1207 23:35:57.086490  656318 pod_ready.go:94] pod "kube-controller-manager-no-preload-313006" is "Ready"
	I1207 23:35:57.086526  656318 pod_ready.go:86] duration metric: took 184.52001ms for pod "kube-controller-manager-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:57.285885  656318 pod_ready.go:83] waiting for pod "kube-proxy-xw4pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:57.686019  656318 pod_ready.go:94] pod "kube-proxy-xw4pf" is "Ready"
	I1207 23:35:57.686045  656318 pod_ready.go:86] duration metric: took 400.132494ms for pod "kube-proxy-xw4pf" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:57.886678  656318 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:58.286146  656318 pod_ready.go:94] pod "kube-scheduler-no-preload-313006" is "Ready"
	I1207 23:35:58.286179  656318 pod_ready.go:86] duration metric: took 399.470825ms for pod "kube-scheduler-no-preload-313006" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:35:58.286194  656318 pod_ready.go:40] duration metric: took 36.408049997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:35:58.340973  656318 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1207 23:35:58.342970  656318 out.go:179] * Done! kubectl is now configured to use "no-preload-313006" cluster and "default" namespace by default
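
	[editor's note] The 656318 lines above poll each kube-system pod until it reports "Ready" and record a per-pod duration metric. A minimal sketch of that polling loop with a stubbed readiness check; isPodReady is a hypothetical stand-in for a client-go lookup, not minikube's pod_ready.go.

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"time"
	    )

	    // isPodReady is a placeholder; a real version would query the API
	    // server for the pod's Ready condition.
	    func isPodReady(name string) bool { return true }

	    // waitPodReady polls until the pod is Ready or the timeout expires,
	    // mirroring the "waiting for pod ... to be Ready" lines above.
	    func waitPodReady(name string, timeout time.Duration) (time.Duration, error) {
	    	start := time.Now()
	    	for {
	    		if isPodReady(name) {
	    			return time.Since(start), nil
	    		}
	    		if time.Since(start) > timeout {
	    			return time.Since(start), errors.New("timed out waiting for " + name)
	    		}
	    		time.Sleep(400 * time.Millisecond)
	    	}
	    }

	    func main() {
	    	d, err := waitPodReady("kube-proxy-xw4pf", 6*time.Minute)
	    	fmt.Println("took", d, "err:", err)
	    }
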
	I1207 23:35:53.513092  663227 cli_runner.go:164] Run: docker exec default-k8s-diff-port-312944 stat /var/lib/dpkg/alternatives/iptables
	I1207 23:35:53.569664  663227 oci.go:144] the created container "default-k8s-diff-port-312944" has a running status.
	I1207 23:35:53.569699  663227 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa...
	I1207 23:35:53.616194  663227 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 23:35:53.654955  663227 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:35:53.677815  663227 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 23:35:53.677836  663227 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-312944 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 23:35:53.734256  663227 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:35:53.756551  663227 machine.go:94] provisionDockerMachine start ...
	I1207 23:35:53.756699  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:53.777538  663227 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:53.777885  663227 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1207 23:35:53.777903  663227 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:35:53.778498  663227 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33252->127.0.0.1:33453: read: connection reset by peer
	I1207 23:35:56.912375  663227 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-312944
	
	I1207 23:35:56.912407  663227 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-312944"
	I1207 23:35:56.912481  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:56.933722  663227 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:56.933966  663227 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1207 23:35:56.933978  663227 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-312944 && echo "default-k8s-diff-port-312944" | sudo tee /etc/hostname
	I1207 23:35:57.089581  663227 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-312944
	
	I1207 23:35:57.089671  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:57.108882  663227 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:57.109181  663227 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1207 23:35:57.109209  663227 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-312944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-312944/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-312944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:35:57.239382  663227 main.go:143] libmachine: SSH cmd err, output: <nil>: 
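
	[editor's note] The SSH script above makes sure /etc/hosts maps 127.0.1.1 to the new machine name: if any line already mentions the hostname it leaves the file alone, otherwise it rewrites an existing 127.0.1.1 entry or appends one. The same decision expressed in plain Go over the file contents; a sketch of the logic, not the script minikube actually renders.

	    package main

	    import (
	    	"fmt"
	    	"strings"
	    )

	    // ensureHostname returns hosts content with a 127.0.1.1 entry for name,
	    // rewriting an existing 127.0.1.1 line or appending a new one.
	    func ensureHostname(hosts, name string) string {
	    	if strings.Contains(hosts, name) {
	    		return hosts // some line already mentions the hostname
	    	}
	    	lines := strings.Split(hosts, "\n")
	    	for i, l := range lines {
	    		if strings.HasPrefix(l, "127.0.1.1") {
	    			lines[i] = "127.0.1.1 " + name
	    			return strings.Join(lines, "\n")
	    		}
	    	}
	    	return hosts + "\n127.0.1.1 " + name + "\n"
	    }

	    func main() {
	    	fmt.Println(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name", "default-k8s-diff-port-312944"))
	    }
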
	I1207 23:35:57.239416  663227 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:35:57.239450  663227 ubuntu.go:190] setting up certificates
	I1207 23:35:57.239464  663227 provision.go:84] configureAuth start
	I1207 23:35:57.239537  663227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:35:57.259204  663227 provision.go:143] copyHostCerts
	I1207 23:35:57.259266  663227 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:35:57.259275  663227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:35:57.259370  663227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:35:57.259494  663227 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:35:57.259504  663227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:35:57.259547  663227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:35:57.259610  663227 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:35:57.259617  663227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:35:57.259644  663227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:35:57.259709  663227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-312944 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-312944 localhost minikube]
	I1207 23:35:57.380006  663227 provision.go:177] copyRemoteCerts
	I1207 23:35:57.380201  663227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:35:57.380362  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:57.400600  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:57.514388  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:35:57.542751  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1207 23:35:57.561877  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:35:57.580928  663227 provision.go:87] duration metric: took 341.449385ms to configureAuth
	I1207 23:35:57.580959  663227 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:35:57.581113  663227 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:35:57.581208  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:57.601315  663227 main.go:143] libmachine: Using SSH client type: native
	I1207 23:35:57.601571  663227 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1207 23:35:57.601587  663227 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:35:57.900137  663227 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:35:57.900168  663227 machine.go:97] duration metric: took 4.143590275s to provisionDockerMachine
	I1207 23:35:57.900181  663227 client.go:176] duration metric: took 9.197426744s to LocalClient.Create
	I1207 23:35:57.900203  663227 start.go:167] duration metric: took 9.197496265s to libmachine.API.Create "default-k8s-diff-port-312944"
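
	[editor's note] Provisioning above finishes by writing /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS pointing --insecure-registry at the service CIDR, then restarting CRI-O. A sketch of composing that remote command string, assuming the service CIDR is the only option; the actual step is the printf/tee pipeline shown in the log.

	    package main

	    import "fmt"

	    // crioSysconfigCmd builds the remote shell command seen in the log:
	    // write a sysconfig drop-in with the insecure-registry flag, then restart crio.
	    func crioSysconfigCmd(serviceCIDR string) string {
	    	content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	    	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, content)
	    }

	    func main() {
	    	fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
	    }
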
	I1207 23:35:57.900219  663227 start.go:293] postStartSetup for "default-k8s-diff-port-312944" (driver="docker")
	I1207 23:35:57.900236  663227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:35:57.900318  663227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:35:57.900402  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:57.923155  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:58.027598  663227 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:35:58.031781  663227 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:35:58.031812  663227 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:35:58.031825  663227 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:35:58.031877  663227 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:35:58.031973  663227 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:35:58.032092  663227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:35:58.040270  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:35:58.063465  663227 start.go:296] duration metric: took 163.229604ms for postStartSetup
	I1207 23:35:58.063866  663227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:35:58.087920  663227 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/config.json ...
	I1207 23:35:58.088259  663227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:35:58.088304  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:58.109143  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:58.205010  663227 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:35:58.210080  663227 start.go:128] duration metric: took 9.50986297s to createHost
	I1207 23:35:58.210108  663227 start.go:83] releasing machines lock for "default-k8s-diff-port-312944", held for 9.51001628s
	I1207 23:35:58.210186  663227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:35:58.230428  663227 ssh_runner.go:195] Run: cat /version.json
	I1207 23:35:58.230495  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:58.230505  663227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:35:58.230600  663227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:35:58.251090  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:58.251094  663227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:35:58.351196  663227 ssh_runner.go:195] Run: systemctl --version
	I1207 23:35:58.444301  663227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:35:58.490592  663227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:35:58.497308  663227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:35:58.497393  663227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:35:58.529982  663227 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
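
	[editor's note] The find/mv step above pushes any pre-existing bridge or podman CNI configs in /etc/cni/net.d aside by appending .mk_disabled, so only the CNI minikube installs (kindnet here) stays active. A sketch of the same rename with filepath.Glob and os.Rename, assuming the same directory and suffix.

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"path/filepath"
	    	"strings"
	    )

	    // disableBridgeCNIs renames bridge/podman CNI configs out of the way,
	    // mirroring the find/mv seen in the log.
	    func disableBridgeCNIs(dir string) ([]string, error) {
	    	var disabled []string
	    	for _, pattern := range []string{"*bridge*", "*podman*"} {
	    		matches, err := filepath.Glob(filepath.Join(dir, pattern))
	    		if err != nil {
	    			return disabled, err
	    		}
	    		for _, m := range matches {
	    			if strings.HasSuffix(m, ".mk_disabled") {
	    				continue // already disabled
	    			}
	    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
	    				return disabled, err
	    			}
	    			disabled = append(disabled, m)
	    		}
	    	}
	    	return disabled, nil
	    }

	    func main() {
	    	names, err := disableBridgeCNIs("/etc/cni/net.d")
	    	fmt.Println("disabled:", names, "err:", err)
	    }
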
	I1207 23:35:58.530012  663227 start.go:496] detecting cgroup driver to use...
	I1207 23:35:58.530050  663227 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:35:58.530103  663227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:35:58.550131  663227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:35:58.565080  663227 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:35:58.565145  663227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:35:58.585947  663227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:35:58.611485  663227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:35:58.719375  663227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:35:58.824231  663227 docker.go:234] disabling docker service ...
	I1207 23:35:58.824292  663227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:35:58.846221  663227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:35:58.861203  663227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:35:58.952053  663227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:35:59.048221  663227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:35:59.063271  663227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:35:59.078250  663227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:35:59.078307  663227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.091512  663227 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:35:59.091585  663227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.102342  663227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.112767  663227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.122283  663227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:35:59.132267  663227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.141985  663227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.158203  663227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:35:59.169026  663227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:35:59.178463  663227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:35:59.187956  663227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:35:59.283304  663227 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:36:00.900658  663227 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.617290285s)
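
	[editor's note] The sequence just above edits /etc/crio/crio.conf.d/02-crio.conf in place, pinning pause_image to registry.k8s.io/pause:3.10.1 and cgroup_manager to systemd before restarting CRI-O. A sketch of those key rewrites with a regexp over the file contents; the real steps are the sed one-liners in the log, and this helper only replaces existing keys, it does not append missing ones.

	    package main

	    import (
	    	"fmt"
	    	"regexp"
	    )

	    // setTOMLKey rewrites an existing `key = ...` line, matching the sed
	    // commands above.
	    func setTOMLKey(conf, key, value string) string {
	    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	    	return re.ReplaceAllString(conf, key+" = "+value)
	    }

	    func main() {
	    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
	    	conf = setTOMLKey(conf, "pause_image", `"registry.k8s.io/pause:3.10.1"`)
	    	conf = setTOMLKey(conf, "cgroup_manager", `"systemd"`)
	    	fmt.Print(conf)
	    }
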
	I1207 23:36:00.900698  663227 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:36:00.900755  663227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:36:00.906075  663227 start.go:564] Will wait 60s for crictl version
	I1207 23:36:00.906135  663227 ssh_runner.go:195] Run: which crictl
	I1207 23:36:00.910783  663227 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:36:00.941450  663227 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:36:00.941556  663227 ssh_runner.go:195] Run: crio --version
	I1207 23:36:00.974038  663227 ssh_runner.go:195] Run: crio --version
	I1207 23:36:01.009347  663227 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:35:57.837385  665837 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1207 23:35:57.837699  665837 start.go:159] libmachine.API.Create for "newest-cni-858719" (driver="docker")
	I1207 23:35:57.837745  665837 client.go:173] LocalClient.Create starting
	I1207 23:35:57.837823  665837 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem
	I1207 23:35:57.837864  665837 main.go:143] libmachine: Decoding PEM data...
	I1207 23:35:57.837889  665837 main.go:143] libmachine: Parsing certificate...
	I1207 23:35:57.837978  665837 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem
	I1207 23:35:57.838028  665837 main.go:143] libmachine: Decoding PEM data...
	I1207 23:35:57.838047  665837 main.go:143] libmachine: Parsing certificate...
	I1207 23:35:57.838510  665837 cli_runner.go:164] Run: docker network inspect newest-cni-858719 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 23:35:57.858561  665837 cli_runner.go:211] docker network inspect newest-cni-858719 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 23:35:57.858664  665837 network_create.go:284] running [docker network inspect newest-cni-858719] to gather additional debugging logs...
	I1207 23:35:57.858692  665837 cli_runner.go:164] Run: docker network inspect newest-cni-858719
	W1207 23:35:57.876519  665837 cli_runner.go:211] docker network inspect newest-cni-858719 returned with exit code 1
	I1207 23:35:57.876574  665837 network_create.go:287] error running [docker network inspect newest-cni-858719]: docker network inspect newest-cni-858719: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-858719 not found
	I1207 23:35:57.876605  665837 network_create.go:289] output of [docker network inspect newest-cni-858719]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-858719 not found
	
	** /stderr **
	I1207 23:35:57.876734  665837 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:35:57.897599  665837 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-918c8f4f6e86 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:f0:02:fe:94:4b} reservation:<nil>}
	I1207 23:35:57.898673  665837 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce07fb07c16c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:d2:35:46:a2:0a} reservation:<nil>}
	I1207 23:35:57.899174  665837 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f198eadca31e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:79:39:d6:10:dc} reservation:<nil>}
	I1207 23:35:57.900387  665837 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e24c30}
	I1207 23:35:57.900433  665837 network_create.go:124] attempt to create docker network newest-cni-858719 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1207 23:35:57.900511  665837 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-858719 newest-cni-858719
	I1207 23:35:57.954885  665837 network_create.go:108] docker network newest-cni-858719 192.168.76.0/24 created
	I1207 23:35:57.954925  665837 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-858719" container
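
	[editor's note] The network_create lines walk candidate private /24 subnets (192.168.49.0, .58, .67, ...) until one is not already claimed by another docker network, then create the bridge with .1 as the gateway and give the node .2. A sketch of that walk with a stubbed taken() check, assuming the stride of 9 visible in the log; the real reservation logic lives in minikube's network package.

	    package main

	    import "fmt"

	    // firstFreeSubnet walks 192.168.x.0/24 candidates in steps of 9 (as in
	    // the log: 49, 58, 67, 76, ...) and returns the first free one.
	    func firstFreeSubnet(taken func(string) bool) string {
	    	for third := 49; third <= 247; third += 9 {
	    		subnet := fmt.Sprintf("192.168.%d.0/24", third)
	    		if !taken(subnet) {
	    			return subnet
	    		}
	    	}
	    	return ""
	    }

	    func main() {
	    	used := map[string]bool{
	    		"192.168.49.0/24": true,
	    		"192.168.58.0/24": true,
	    		"192.168.67.0/24": true,
	    	}
	    	subnet := firstFreeSubnet(func(s string) bool { return used[s] })
	    	// With the three subnets above occupied, this prints 192.168.76.0/24,
	    	// matching the network the log ends up creating.
	    	fmt.Println("using", subnet)
	    }
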
	I1207 23:35:57.955000  665837 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 23:35:57.974658  665837 cli_runner.go:164] Run: docker volume create newest-cni-858719 --label name.minikube.sigs.k8s.io=newest-cni-858719 --label created_by.minikube.sigs.k8s.io=true
	I1207 23:35:57.993281  665837 oci.go:103] Successfully created a docker volume newest-cni-858719
	I1207 23:35:57.993386  665837 cli_runner.go:164] Run: docker run --rm --name newest-cni-858719-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-858719 --entrypoint /usr/bin/test -v newest-cni-858719:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 23:35:58.442831  665837 oci.go:107] Successfully prepared a docker volume newest-cni-858719
	I1207 23:35:58.442922  665837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:35:58.442941  665837 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 23:35:58.443034  665837 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-858719:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1207 23:36:01.587032  665837 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-858719:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.143929303s)
	I1207 23:36:01.587075  665837 kic.go:203] duration metric: took 3.14412959s to extract preloaded images to volume ...
	W1207 23:36:01.587168  665837 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 23:36:01.587222  665837 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 23:36:01.587272  665837 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 23:36:01.651169  665837 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-858719 --name newest-cni-858719 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-858719 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-858719 --network newest-cni-858719 --ip 192.168.76.2 --volume newest-cni-858719:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 23:36:01.960842  665837 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Running}}
	I1207 23:36:01.981910  665837 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:02.003946  665837 cli_runner.go:164] Run: docker exec newest-cni-858719 stat /var/lib/dpkg/alternatives/iptables
	I1207 23:36:02.058757  665837 oci.go:144] the created container "newest-cni-858719" has a running status.
	I1207 23:36:02.058789  665837 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa...
	I1207 23:36:02.200970  665837 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 23:36:02.239696  665837 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:02.267119  665837 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 23:36:02.267154  665837 kic_runner.go:114] Args: [docker exec --privileged newest-cni-858719 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 23:36:02.338750  665837 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:02.363303  665837 machine.go:94] provisionDockerMachine start ...
	I1207 23:36:02.363460  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:02.387599  665837 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:02.387949  665837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1207 23:36:02.387977  665837 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:36:02.529963  665837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-858719
	
	I1207 23:36:02.529990  665837 ubuntu.go:182] provisioning hostname "newest-cni-858719"
	I1207 23:36:02.530052  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:02.554476  665837 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:02.554931  665837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1207 23:36:02.554949  665837 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-858719 && echo "newest-cni-858719" | sudo tee /etc/hostname
	I1207 23:36:01.010822  663227 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-312944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:36:01.034931  663227 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1207 23:36:01.040248  663227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:01.051249  663227 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-312944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:36:01.051460  663227 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:36:01.051545  663227 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:01.088728  663227 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:01.088758  663227 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:36:01.088815  663227 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:01.121286  663227 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:01.121309  663227 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:36:01.121318  663227 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.2 crio true true} ...
	I1207 23:36:01.121424  663227 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-312944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:36:01.121491  663227 ssh_runner.go:195] Run: crio config
	I1207 23:36:01.178887  663227 cni.go:84] Creating CNI manager for ""
	I1207 23:36:01.178914  663227 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:01.178938  663227 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:36:01.179025  663227 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-312944 NodeName:default-k8s-diff-port-312944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:36:01.179239  663227 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-312944"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
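
The dump above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small sketch, assuming gopkg.in/yaml.v3 is available, that walks such a multi-document file and prints each document's apiVersion and kind:

```go
// Walk a multi-document kubeadm config and print the apiVersion/kind of each document.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type header struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	// Placeholder path; on the node the generated config lands in
	// /var/tmp/minikube/kubeadm.yaml.new.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var h header
		if err := dec.Decode(&h); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "bad document:", err)
			os.Exit(1)
		}
		fmt.Printf("%s / %s\n", h.APIVersion, h.Kind)
	}
}
```
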
	
	I1207 23:36:01.179532  663227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:36:01.189813  663227 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:36:01.189950  663227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:36:01.200736  663227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1207 23:36:01.217998  663227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:36:01.327130  663227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1207 23:36:01.341947  663227 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:36:01.346354  663227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:01.412358  663227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:01.521527  663227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:36:01.549890  663227 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944 for IP: 192.168.94.2
	I1207 23:36:01.549911  663227 certs.go:195] generating shared ca certs ...
	I1207 23:36:01.549928  663227 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.550081  663227 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:36:01.550150  663227 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:36:01.550161  663227 certs.go:257] generating profile certs ...
	I1207 23:36:01.550242  663227 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.key
	I1207 23:36:01.550259  663227 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.crt with IP's: []
	I1207 23:36:01.675649  663227 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.crt ...
	I1207 23:36:01.675688  663227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.crt: {Name:mka8498bebd3154217aba57e65c364430d9492be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.675892  663227 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.key ...
	I1207 23:36:01.675918  663227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.key: {Name:mka99e968c159a29eb845d9bc469095c2d3c0f20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.676054  663227 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key.025605fa
	I1207 23:36:01.676079  663227 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt.025605fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1207 23:36:01.794967  663227 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt.025605fa ...
	I1207 23:36:01.795003  663227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt.025605fa: {Name:mkf035a40b177f7648e0882d28e7999ec530dbf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.795163  663227 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key.025605fa ...
	I1207 23:36:01.795177  663227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key.025605fa: {Name:mk7ffe413bfcab5db395a27d40c70ef2413838d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.795259  663227 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt.025605fa -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt
	I1207 23:36:01.795352  663227 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key.025605fa -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key
	I1207 23:36:01.795422  663227 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.key
	I1207 23:36:01.795441  663227 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.crt with IP's: []
	I1207 23:36:01.870314  663227 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.crt ...
	I1207 23:36:01.870360  663227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.crt: {Name:mka1b99d7a1d1582ba7af59029ce88fcd4691dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:01.870557  663227 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.key ...
	I1207 23:36:01.870575  663227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.key: {Name:mkc0cd79e929210ed866cd63dbcbdd822785859c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
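
The crypto.go lines above generate the profile certificates: a client cert, an apiserver cert whose SANs are the service IP, loopback, and node IP printed in the log, and an aggregator proxy-client cert, signed against the shared minikubeCA key the run reuses earlier. A self-contained crypto/x509 sketch that produces a certificate carrying the same IP SAN list (self-signed here for brevity, unlike the CA-signed certs in the log):

```go
// Generate an RSA key and a certificate with the IP SANs shown in the log,
// then print both as PEM. Illustration only, not minikube's cert path.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SAN list the log prints for apiserver.crt
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	// Self-signed: template doubles as parent. The log instead signs with minikubeCA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}
```
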
	I1207 23:36:01.870788  663227 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:36:01.870841  663227 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:36:01.870848  663227 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:36:01.870870  663227 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:36:01.870894  663227 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:36:01.870917  663227 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:36:01.870973  663227 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:01.871802  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:36:01.892700  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:36:01.914757  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:36:01.935631  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:36:01.958032  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1207 23:36:01.978816  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:36:02.004227  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:36:02.029404  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:36:02.050868  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:36:02.076246  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:36:02.097049  663227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:36:02.119763  663227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:36:02.136851  663227 ssh_runner.go:195] Run: openssl version
	I1207 23:36:02.143442  663227 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:02.151407  663227 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:36:02.159022  663227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:02.163159  663227 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:02.163227  663227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:02.214043  663227 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:36:02.226767  663227 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:36:02.240376  663227 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:36:02.250985  663227 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:36:02.263110  663227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:36:02.269140  663227 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:36:02.269303  663227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:36:02.346026  663227 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:36:02.358157  663227 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
	I1207 23:36:02.369394  663227 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:36:02.381286  663227 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:36:02.393663  663227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:36:02.399245  663227 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:36:02.399308  663227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:36:02.448268  663227 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:02.457756  663227 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
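
The openssl/ln sequences above are the standard CA trust wiring: compute each certificate's subject hash and symlink <hash>.0 in /etc/ssl/certs back to the PEM (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the two test certs). The same step as a small Go sketch that shells out to openssl; the paths are the ones from the log, and creating the link needs root:

```go
// Ask openssl for a certificate's subject hash and link <hash>.0 to the PEM,
// mirroring the `openssl x509 -hash -noout -in ...` plus `ln -fs` in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl failed:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: drop any stale link first, then create the new one.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink failed:", err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", pemPath)
}
```
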
	I1207 23:36:02.467276  663227 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:36:02.471972  663227 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:36:02.472039  663227 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-312944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:02.472125  663227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:36:02.472163  663227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:36:02.504544  663227 cri.go:89] found id: ""
	I1207 23:36:02.504607  663227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:36:02.514004  663227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:36:02.523079  663227 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:36:02.523139  663227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:36:02.532543  663227 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:36:02.532564  663227 kubeadm.go:158] found existing configuration files:
	
	I1207 23:36:02.532607  663227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1207 23:36:02.542265  663227 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:36:02.542386  663227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:36:02.551752  663227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1207 23:36:02.562486  663227 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:36:02.562546  663227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:36:02.572678  663227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1207 23:36:02.584922  663227 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:36:02.584992  663227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:36:02.593892  663227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1207 23:36:02.603546  663227 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:36:02.603597  663227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
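
The grep/rm pairs above are the stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not already point at https://control-plane.minikube.internal:8444 is deleted so kubeadm init can regenerate it (here all four files are absent, so every `rm -f` is a no-op). A compact sketch of the same loop, illustrative rather than minikube's own code:

```go
// Keep a kubeconfig only if it already references the expected control-plane
// endpoint; otherwise remove it, like the grep + `rm -f` pairs in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it (a no-op when absent).
			os.Remove(f)
			fmt.Println("removed (or absent):", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}
```
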
	I1207 23:36:02.613207  663227 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:36:02.658937  663227 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 23:36:02.659104  663227 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:36:02.683465  663227 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:36:02.683537  663227 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:36:02.683586  663227 kubeadm.go:319] OS: Linux
	I1207 23:36:02.683657  663227 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:36:02.683761  663227 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:36:02.683867  663227 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:36:02.683947  663227 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:36:02.684040  663227 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:36:02.684132  663227 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:36:02.684246  663227 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:36:02.684364  663227 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:36:02.772158  663227 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:36:02.772311  663227 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:36:02.772457  663227 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:36:02.783844  663227 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 23:36:02.786437  663227 out.go:252]   - Generating certificates and keys ...
	I1207 23:36:02.786586  663227 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:36:02.786694  663227 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:36:02.705898  665837 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-858719
	
	I1207 23:36:02.706000  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:02.728491  665837 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:02.728775  665837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1207 23:36:02.728805  665837 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-858719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-858719/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-858719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:36:02.872237  665837 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:36:02.872277  665837 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:36:02.872305  665837 ubuntu.go:190] setting up certificates
	I1207 23:36:02.872341  665837 provision.go:84] configureAuth start
	I1207 23:36:02.872421  665837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:02.892316  665837 provision.go:143] copyHostCerts
	I1207 23:36:02.892393  665837 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:36:02.892402  665837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:36:02.892516  665837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:36:02.892646  665837 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:36:02.892664  665837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:36:02.892706  665837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:36:02.892798  665837 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:36:02.892810  665837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:36:02.892845  665837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:36:02.892914  665837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.newest-cni-858719 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-858719]
	I1207 23:36:03.050587  665837 provision.go:177] copyRemoteCerts
	I1207 23:36:03.050669  665837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:36:03.050721  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:03.070035  665837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:03.165012  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:36:03.185451  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1207 23:36:03.203690  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 23:36:03.222074  665837 provision.go:87] duration metric: took 349.709006ms to configureAuth
	I1207 23:36:03.222103  665837 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:36:03.222391  665837 config.go:182] Loaded profile config "newest-cni-858719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:03.222524  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:03.241380  665837 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:03.241606  665837 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1207 23:36:03.241621  665837 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:36:03.515191  665837 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:36:03.515218  665837 machine.go:97] duration metric: took 1.151891491s to provisionDockerMachine
	I1207 23:36:03.515227  665837 client.go:176] duration metric: took 5.677472406s to LocalClient.Create
	I1207 23:36:03.515243  665837 start.go:167] duration metric: took 5.677548642s to libmachine.API.Create "newest-cni-858719"
	I1207 23:36:03.515251  665837 start.go:293] postStartSetup for "newest-cni-858719" (driver="docker")
	I1207 23:36:03.515267  665837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:36:03.515351  665837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:36:03.515402  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:03.533709  665837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:03.630870  665837 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:36:03.635066  665837 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:36:03.635097  665837 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:36:03.635112  665837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:36:03.635169  665837 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:36:03.635266  665837 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:36:03.635427  665837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:36:03.643491  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:03.665613  665837 start.go:296] duration metric: took 150.343557ms for postStartSetup
	I1207 23:36:03.665988  665837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:03.685535  665837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json ...
	I1207 23:36:03.685798  665837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:36:03.685840  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:03.704761  665837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:03.800788  665837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:36:03.806954  665837 start.go:128] duration metric: took 5.972090169s to createHost
	I1207 23:36:03.806991  665837 start.go:83] releasing machines lock for "newest-cni-858719", held for 5.972264495s
	I1207 23:36:03.807085  665837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:03.828719  665837 ssh_runner.go:195] Run: cat /version.json
	I1207 23:36:03.828755  665837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:36:03.828780  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:03.828863  665837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:03.851741  665837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:03.853088  665837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:04.013379  665837 ssh_runner.go:195] Run: systemctl --version
	I1207 23:36:04.020512  665837 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:36:04.062124  665837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:36:04.068394  665837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:36:04.068478  665837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:36:04.098889  665837 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
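
The find/mv above sidelines any bridge or podman CNI config by renaming it to *.mk_disabled, since the docker driver + crio runtime combination uses the kindnet config minikube installs instead. A rough Go equivalent of that pass (illustrative only; renaming files under /etc/cni/net.d requires root):

```go
// Rename bridge/podman CNI configs in /etc/cni/net.d to <name>.mk_disabled,
// skipping anything already disabled, like the `find ... -exec mv` in the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Fprintln(os.Stderr, "rename failed:", err)
			continue
		}
		fmt.Println("disabled:", src)
	}
}
```
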
	I1207 23:36:04.098918  665837 start.go:496] detecting cgroup driver to use...
	I1207 23:36:04.098952  665837 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:36:04.099002  665837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:36:04.120631  665837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:36:04.134242  665837 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:36:04.134311  665837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:36:04.151593  665837 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:36:04.173137  665837 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:36:04.268487  665837 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:36:04.357170  665837 docker.go:234] disabling docker service ...
	I1207 23:36:04.357234  665837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:36:04.376914  665837 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:36:04.390376  665837 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:36:04.477949  665837 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:36:04.572394  665837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:36:04.585670  665837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:36:04.599761  665837 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:36:04.599839  665837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.610153  665837 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:36:04.610221  665837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.619204  665837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.628444  665837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.637136  665837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:36:04.646036  665837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.655665  665837 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.670198  665837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:04.679196  665837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:36:04.686802  665837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:36:04.694408  665837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:04.776822  665837 ssh_runner.go:195] Run: sudo systemctl restart crio
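
The sed calls above pin the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf (plus the conmon_cgroup and default_sysctls tweaks) before crio is restarted. A sketch of the two main rewrites in Go, using a hypothetical `setKey` helper in place of sed:

```go
// Rewrite `key = ...` lines in the crio drop-in config, the way the log does
// with `sed -i 's|^.*key = .*$|key = "value"|'`. Needs root to write the file.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces any existing `key = ...` line wholesale with `key = "value"`.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "systemd")
	if err := os.WriteFile(path, conf, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The real flow follows this with `systemctl daemon-reload` and `systemctl restart crio`.
}
```
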
	I1207 23:36:04.915221  665837 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:36:04.915298  665837 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:36:04.919552  665837 start.go:564] Will wait 60s for crictl version
	I1207 23:36:04.919612  665837 ssh_runner.go:195] Run: which crictl
	I1207 23:36:04.923744  665837 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:36:04.949075  665837 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:36:04.949172  665837 ssh_runner.go:195] Run: crio --version
	I1207 23:36:04.978314  665837 ssh_runner.go:195] Run: crio --version
	I1207 23:36:05.012522  665837 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1207 23:36:05.013994  665837 cli_runner.go:164] Run: docker network inspect newest-cni-858719 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:36:05.032995  665837 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1207 23:36:05.037347  665837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:05.049520  665837 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1207 23:36:05.050792  665837 kubeadm.go:884] updating cluster {Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:36:05.050937  665837 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:36:05.051041  665837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:05.083782  665837 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:05.083802  665837 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:36:05.083847  665837 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:05.110679  665837 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:05.110702  665837 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:36:05.110710  665837 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1207 23:36:05.110797  665837 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-858719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:36:05.110871  665837 ssh_runner.go:195] Run: crio config
	I1207 23:36:05.156683  665837 cni.go:84] Creating CNI manager for ""
	I1207 23:36:05.156713  665837 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:05.156737  665837 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1207 23:36:05.156771  665837 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-858719 NodeName:newest-cni-858719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:36:05.156939  665837 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-858719"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:36:05.157019  665837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1207 23:36:05.165120  665837 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:36:05.165195  665837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:36:05.172977  665837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1207 23:36:05.185886  665837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1207 23:36:05.201903  665837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1207 23:36:05.215261  665837 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:36:05.219158  665837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:05.229705  665837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:05.334863  665837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:36:05.359142  665837 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719 for IP: 192.168.76.2
	I1207 23:36:05.359170  665837 certs.go:195] generating shared ca certs ...
	I1207 23:36:05.359194  665837 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.359394  665837 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:36:05.359470  665837 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:36:05.359492  665837 certs.go:257] generating profile certs ...
	I1207 23:36:05.359577  665837 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.key
	I1207 23:36:05.359596  665837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.crt with IP's: []
	I1207 23:36:05.458374  665837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.crt ...
	I1207 23:36:05.458420  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.crt: {Name:mk6d221c986dcccee24e29be113ea69c348eb796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.458674  665837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.key ...
	I1207 23:36:05.458700  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.key: {Name:mk707f84edcddb5e4839ac043dccc843e61f8210 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.458880  665837 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key.81fe4363
	I1207 23:36:05.458905  665837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt.81fe4363 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1207 23:36:05.509982  665837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt.81fe4363 ...
	I1207 23:36:05.510016  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt.81fe4363: {Name:mk0054ada683cb260273268c7ce81d6ead662c0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.510198  665837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key.81fe4363 ...
	I1207 23:36:05.510217  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key.81fe4363: {Name:mk31ff682dcae9ca96dc237204769da60d62ee7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.510321  665837 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt.81fe4363 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt
	I1207 23:36:05.510458  665837 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key.81fe4363 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key
	I1207 23:36:05.510543  665837 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key
	I1207 23:36:05.510566  665837 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.crt with IP's: []
	I1207 23:36:05.640941  665837 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.crt ...
	I1207 23:36:05.640971  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.crt: {Name:mkd739335a5874b7c0a770d58de470172e2dbedd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.641129  665837 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key ...
	I1207 23:36:05.641142  665837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key: {Name:mke6ddf36874683d7952a65cdc8000dccae22770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:05.641319  665837 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:36:05.641387  665837 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:36:05.641396  665837 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:36:05.641421  665837 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:36:05.641447  665837 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:36:05.641470  665837 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:36:05.641517  665837 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:05.642150  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:36:05.661589  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:36:05.679772  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:36:05.698458  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:36:05.716258  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1207 23:36:05.733934  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:36:05.751905  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:36:05.770267  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:36:05.787797  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:36:05.807066  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:36:05.824497  665837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:36:05.842195  665837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:36:05.856459  665837 ssh_runner.go:195] Run: openssl version
	I1207 23:36:05.862789  665837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:36:05.870598  665837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:36:05.878422  665837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:36:05.882313  665837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:36:05.882379  665837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:36:05.917466  665837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:05.925258  665837 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:05.933502  665837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:05.941365  665837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:36:05.949194  665837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:05.952999  665837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:05.953069  665837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:05.988689  665837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:36:05.996881  665837 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:36:06.005175  665837 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:36:06.013509  665837 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:36:06.021477  665837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:36:06.025846  665837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:36:06.025909  665837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:36:06.065650  665837 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:36:06.073822  665837 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
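	The three cert-install passes above all follow the same pattern: copy the PEM into /usr/share/ca-certificates, symlink it into /etc/ssl/certs, then add a second symlink named after its OpenSSL subject hash so the system trust store can resolve it. A hedged end-to-end sketch for the minikubeCA bundle, chaining the same commands the log records (b5213941 is the hash computed above):
	
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0
	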
	I1207 23:36:06.081580  665837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:36:06.085187  665837 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:36:06.085252  665837 kubeadm.go:401] StartCluster: {Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:06.085347  665837 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:36:06.085403  665837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:36:06.116569  665837 cri.go:89] found id: ""
	I1207 23:36:06.116652  665837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:36:06.125546  665837 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:36:06.133602  665837 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:36:06.133654  665837 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:36:06.141466  665837 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:36:06.141488  665837 kubeadm.go:158] found existing configuration files:
	
	I1207 23:36:06.141546  665837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:36:06.149511  665837 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:36:06.149584  665837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:36:06.157230  665837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:36:06.165147  665837 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:36:06.165213  665837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:36:06.173153  665837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:36:06.181615  665837 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:36:06.181677  665837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:36:06.189582  665837 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:36:06.197529  665837 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:36:06.197592  665837 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 23:36:06.205204  665837 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:36:06.246058  665837 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1207 23:36:06.246147  665837 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:36:06.331379  665837 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:36:06.331493  665837 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:36:06.331530  665837 kubeadm.go:319] OS: Linux
	I1207 23:36:06.331600  665837 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:36:06.331651  665837 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:36:06.331710  665837 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:36:06.331818  665837 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:36:06.331916  665837 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:36:06.331981  665837 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:36:06.332048  665837 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:36:06.332128  665837 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:36:06.391726  665837 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:36:06.391894  665837 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:36:06.392060  665837 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:36:06.400113  665837 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 23:36:06.402392  665837 out.go:252]   - Generating certificates and keys ...
	I1207 23:36:06.402495  665837 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:36:06.402610  665837 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:36:06.483657  665837 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:36:06.573106  665837 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 23:36:06.701431  665837 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 23:36:06.736106  665837 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 23:36:07.035667  665837 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 23:36:07.035880  665837 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-858719] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1207 23:36:07.083552  665837 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 23:36:07.083740  665837 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-858719] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1207 23:36:07.112003  665837 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 23:36:07.149906  665837 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 23:36:07.264022  665837 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 23:36:07.264234  665837 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:36:07.309844  665837 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:36:07.342914  665837 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:36:07.358133  665837 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:36:07.458609  665837 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:36:07.517290  665837 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:36:07.517849  665837 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:36:07.522893  665837 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 23:36:07.525987  665837 out.go:252]   - Booting up control plane ...
	I1207 23:36:07.526156  665837 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:36:07.526263  665837 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:36:07.526372  665837 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:36:07.540600  665837 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:36:07.540760  665837 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:36:07.549156  665837 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:36:07.549503  665837 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:36:07.549598  665837 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:36:03.875543  663227 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:36:04.187689  663227 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 23:36:04.382114  663227 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 23:36:04.713204  663227 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 23:36:05.332551  663227 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 23:36:05.332771  663227 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-312944 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1207 23:36:05.534039  663227 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 23:36:05.534236  663227 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-312944 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1207 23:36:06.312022  663227 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 23:36:06.545164  663227 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 23:36:06.798735  663227 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 23:36:06.799013  663227 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:36:07.135049  663227 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:36:07.298200  663227 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:36:07.371156  663227 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:36:07.646146  663227 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:36:08.354276  663227 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:36:08.354863  663227 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:36:08.361382  663227 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 23:36:08.363200  663227 out.go:252]   - Booting up control plane ...
	I1207 23:36:08.363380  663227 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:36:08.363524  663227 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:36:08.364215  663227 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:36:08.381319  663227 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:36:08.381493  663227 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:36:08.389287  663227 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:36:08.389544  663227 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:36:08.389619  663227 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:36:07.667091  665837 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 23:36:07.667277  665837 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 23:36:08.168140  665837 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.174754ms
	I1207 23:36:08.171235  665837 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 23:36:08.171381  665837 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1207 23:36:08.171543  665837 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 23:36:08.171624  665837 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 23:36:09.176277  665837 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004990684s
	I1207 23:36:10.185530  665837 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.014192221s
	I1207 23:36:12.173071  665837 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001762312s
	I1207 23:36:12.194610  665837 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 23:36:12.209368  665837 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 23:36:12.226741  665837 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 23:36:12.226977  665837 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-858719 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 23:36:12.239792  665837 kubeadm.go:319] [bootstrap-token] Using token: mq6mhg.hwg0yzc47jfu4zht
	I1207 23:36:12.241395  665837 out.go:252]   - Configuring RBAC rules ...
	I1207 23:36:12.241570  665837 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 23:36:12.247069  665837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 23:36:12.258568  665837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 23:36:12.261693  665837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 23:36:12.264880  665837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 23:36:12.268027  665837 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 23:36:12.580634  665837 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 23:36:08.520167  663227 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 23:36:08.520320  663227 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 23:36:09.520683  663227 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00095023s
	I1207 23:36:09.527427  663227 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 23:36:09.527588  663227 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1207 23:36:09.527732  663227 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 23:36:09.527840  663227 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 23:36:10.760845  663227 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.233408303s
	I1207 23:36:11.212897  663227 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.685473367s
	I1207 23:36:13.029739  663227 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502307789s
	I1207 23:36:13.050429  663227 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 23:36:13.061698  663227 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 23:36:13.072993  663227 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 23:36:13.073304  663227 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-312944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 23:36:13.082949  663227 kubeadm.go:319] [bootstrap-token] Using token: tl37rs.bkc64g0q1t9zifzu
	I1207 23:36:13.084580  663227 out.go:252]   - Configuring RBAC rules ...
	I1207 23:36:13.084759  663227 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 23:36:13.088650  663227 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 23:36:13.098310  663227 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 23:36:13.101157  663227 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 23:36:13.103789  663227 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 23:36:13.106752  663227 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 23:36:13.436558  663227 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 23:36:12.999774  665837 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 23:36:13.581105  665837 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 23:36:13.582408  665837 kubeadm.go:319] 
	I1207 23:36:13.582540  665837 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 23:36:13.582570  665837 kubeadm.go:319] 
	I1207 23:36:13.582689  665837 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 23:36:13.582700  665837 kubeadm.go:319] 
	I1207 23:36:13.582736  665837 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 23:36:13.582820  665837 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 23:36:13.582901  665837 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 23:36:13.582913  665837 kubeadm.go:319] 
	I1207 23:36:13.583000  665837 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 23:36:13.583009  665837 kubeadm.go:319] 
	I1207 23:36:13.583078  665837 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 23:36:13.583087  665837 kubeadm.go:319] 
	I1207 23:36:13.583157  665837 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 23:36:13.583255  665837 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 23:36:13.583347  665837 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 23:36:13.583363  665837 kubeadm.go:319] 
	I1207 23:36:13.583478  665837 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 23:36:13.583590  665837 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 23:36:13.583611  665837 kubeadm.go:319] 
	I1207 23:36:13.583706  665837 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mq6mhg.hwg0yzc47jfu4zht \
	I1207 23:36:13.583892  665837 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 \
	I1207 23:36:13.583939  665837 kubeadm.go:319] 	--control-plane 
	I1207 23:36:13.583949  665837 kubeadm.go:319] 
	I1207 23:36:13.584081  665837 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 23:36:13.584096  665837 kubeadm.go:319] 
	I1207 23:36:13.584213  665837 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mq6mhg.hwg0yzc47jfu4zht \
	I1207 23:36:13.584407  665837 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 
	I1207 23:36:13.587247  665837 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 23:36:13.587427  665837 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 23:36:13.587467  665837 cni.go:84] Creating CNI manager for ""
	I1207 23:36:13.587481  665837 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:13.588936  665837 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1207 23:36:13.861597  663227 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 23:36:14.436921  663227 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 23:36:14.438117  663227 kubeadm.go:319] 
	I1207 23:36:14.438210  663227 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 23:36:14.438220  663227 kubeadm.go:319] 
	I1207 23:36:14.438309  663227 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 23:36:14.438318  663227 kubeadm.go:319] 
	I1207 23:36:14.438375  663227 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 23:36:14.438458  663227 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 23:36:14.438540  663227 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 23:36:14.438557  663227 kubeadm.go:319] 
	I1207 23:36:14.438629  663227 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 23:36:14.438640  663227 kubeadm.go:319] 
	I1207 23:36:14.438706  663227 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 23:36:14.438730  663227 kubeadm.go:319] 
	I1207 23:36:14.438822  663227 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 23:36:14.438917  663227 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 23:36:14.439002  663227 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 23:36:14.439012  663227 kubeadm.go:319] 
	I1207 23:36:14.439128  663227 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 23:36:14.439236  663227 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 23:36:14.439244  663227 kubeadm.go:319] 
	I1207 23:36:14.439382  663227 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token tl37rs.bkc64g0q1t9zifzu \
	I1207 23:36:14.439530  663227 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 \
	I1207 23:36:14.439559  663227 kubeadm.go:319] 	--control-plane 
	I1207 23:36:14.439591  663227 kubeadm.go:319] 
	I1207 23:36:14.439698  663227 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 23:36:14.439717  663227 kubeadm.go:319] 
	I1207 23:36:14.439836  663227 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token tl37rs.bkc64g0q1t9zifzu \
	I1207 23:36:14.439963  663227 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 
	I1207 23:36:14.442972  663227 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 23:36:14.443128  663227 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 23:36:14.443158  663227 cni.go:84] Creating CNI manager for ""
	I1207 23:36:14.443168  663227 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:14.445084  663227 out.go:179] * Configuring CNI (Container Networking Interface) ...
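	Both kubeadm runs above finish by printing join commands whose --discovery-token-ca-cert-hash pins the cluster CA public key. A hedged sketch for recomputing that sha256 pin on the control-plane node, assuming the CA lives where minikube keeps it (/var/lib/minikube/certs, per the certificateDir line earlier) rather than kubeadm's default /etc/kubernetes/pki:
	
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	
	The result should match the hash embedded in the join commands; both profiles reuse the same minikubeCA, which is why the two printed hashes are identical.
	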
	
	
	==> CRI-O <==
	Dec 07 23:35:40 no-preload-313006 crio[572]: time="2025-12-07T23:35:40.514222251Z" level=info msg="Started container" PID=1768 containerID=d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h/dashboard-metrics-scraper id=0ae1d10b-7409-4eaa-9113-849e477d8893 name=/runtime.v1.RuntimeService/StartContainer sandboxID=311dc02799023ad26957723ef5e0353336394c2181a73c8a13fd1a721603fc89
	Dec 07 23:35:41 no-preload-313006 crio[572]: time="2025-12-07T23:35:41.55057371Z" level=info msg="Removing container: 47abf464763e71165bcdab4db1ebf65eb73a9e00bb6a4db90fb3163f12f3d1d5" id=9cc4e628-ffd8-405a-8c03-d4d0a4b02b38 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:35:41 no-preload-313006 crio[572]: time="2025-12-07T23:35:41.561829981Z" level=info msg="Removed container 47abf464763e71165bcdab4db1ebf65eb73a9e00bb6a4db90fb3163f12f3d1d5: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h/dashboard-metrics-scraper" id=9cc4e628-ffd8-405a-8c03-d4d0a4b02b38 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.583579536Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=24e2c512-aee2-4b74-9f64-d1409730e3cc name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.594959133Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=753dbf2c-09e2-4de1-9b38-90615e3173d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.615897917Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=28347122-adc4-4fa9-87ec-59815e90b4b5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.616064446Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.651050159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.651258454Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/190c9a5093d18173d8622fb8634d4e121b451dd4336fd251e3a35f62a7599088/merged/etc/passwd: no such file or directory"
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.651292455Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/190c9a5093d18173d8622fb8634d4e121b451dd4336fd251e3a35f62a7599088/merged/etc/group: no such file or directory"
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.651625946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.887827824Z" level=info msg="Created container 9d70771c342e0e6a8b340491d36ea107bf8abe93159eff71b6b33c5a89df58be: kube-system/storage-provisioner/storage-provisioner" id=28347122-adc4-4fa9-87ec-59815e90b4b5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.889504117Z" level=info msg="Starting container: 9d70771c342e0e6a8b340491d36ea107bf8abe93159eff71b6b33c5a89df58be" id=9dfd54a3-a986-46d4-aca6-b8a774964676 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:35:51 no-preload-313006 crio[572]: time="2025-12-07T23:35:51.891895021Z" level=info msg="Started container" PID=1782 containerID=9d70771c342e0e6a8b340491d36ea107bf8abe93159eff71b6b33c5a89df58be description=kube-system/storage-provisioner/storage-provisioner id=9dfd54a3-a986-46d4-aca6-b8a774964676 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b5531e6a3e1a7af31b709666ec1989cdc6b00c6e736f884036cb80df0a77319
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.458246943Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ffb81999-ca9a-4501-a776-6edd1612a6e1 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.459379346Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=02e66dba-019c-4d33-8cfe-c4c0cc5c484c name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.460419568Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h/dashboard-metrics-scraper" id=0049aa38-ab80-477d-a83d-ee6497c098ab name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.460577119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.467131248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.467770551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.492942649Z" level=info msg="Created container 956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h/dashboard-metrics-scraper" id=0049aa38-ab80-477d-a83d-ee6497c098ab name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.493647342Z" level=info msg="Starting container: 956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0" id=619a6d42-33f8-4903-9c2c-2aeebaa7829b name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.495553707Z" level=info msg="Started container" PID=1818 containerID=956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h/dashboard-metrics-scraper id=619a6d42-33f8-4903-9c2c-2aeebaa7829b name=/runtime.v1.RuntimeService/StartContainer sandboxID=311dc02799023ad26957723ef5e0353336394c2181a73c8a13fd1a721603fc89
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.626197068Z" level=info msg="Removing container: d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485" id=05472ab6-4248-4715-87ca-e4bf9660b2ed name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:36:05 no-preload-313006 crio[572]: time="2025-12-07T23:36:05.636917966Z" level=info msg="Removed container d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h/dashboard-metrics-scraper" id=05472ab6-4248-4715-87ca-e4bf9660b2ed name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	956668bdbf8d2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   311dc02799023       dashboard-metrics-scraper-867fb5f87b-7w27h   kubernetes-dashboard
	9d70771c342e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   3b5531e6a3e1a       storage-provisioner                          kube-system
	8a4e2c23a171e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   5ad6776b7ef68       kubernetes-dashboard-b84665fb8-zvhhr         kubernetes-dashboard
	915a05bbae2c6       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   dfaacba9243d4       busybox                                      default
	63e35ea9afaae       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     0                   f228db3be2520       coredns-7d764666f9-btjrp                     kube-system
	393f33ab322db       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           54 seconds ago      Running             kube-proxy                  0                   08915dbfad33d       kube-proxy-xw4pf                             kube-system
	2c733f7f60399       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   3b5531e6a3e1a       storage-provisioner                          kube-system
	875984b763206       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   4bfdfdc332385       kindnet-nzf5r                                kube-system
	7a318b0832368       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           56 seconds ago      Running             etcd                        0                   17da0d6c592de       etcd-no-preload-313006                       kube-system
	404e1d5beb2da       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           56 seconds ago      Running             kube-controller-manager     0                   5199d5b5b27ac       kube-controller-manager-no-preload-313006    kube-system
	087d0f5345ac8       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           56 seconds ago      Running             kube-apiserver              0                   958dccc6a52f9       kube-apiserver-no-preload-313006             kube-system
	1902052b7fa9a       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           56 seconds ago      Running             kube-scheduler              0                   90bbf1eef33f8       kube-scheduler-no-preload-313006             kube-system
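	The table above is CRI-O's own view of the node's containers. A hedged equivalent when inspecting the node directly (crictl ships in the kicbase image and talks to the same CRI-O socket):
	
	    sudo crictl ps -a
	
	Column layout varies slightly between crictl releases, but the container IDs, state, attempt counts and pod associations are the same data shown here.
	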
	
	
	==> coredns [63e35ea9afaaed7ad438f881cbcaf3b5813164e93a7f04bed7176c35907cb4c0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:56789 - 3926 "HINFO IN 1702562694029715222.3097478757243340104. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030089708s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
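	The CoreDNS block is that container's log stream; a hedged way to pull the same stream on the node, reusing the container ID prefix from the section header (assuming crictl resolves an unambiguous ID prefix, as it normally does):
	
	    sudo crictl logs --tail 50 63e35ea9afaae
	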
	
	
	==> describe nodes <==
	Name:               no-preload-313006
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-313006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=no-preload-313006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_34_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:34:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-313006
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:36:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:35:51 +0000   Sun, 07 Dec 2025 23:34:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:35:51 +0000   Sun, 07 Dec 2025 23:34:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:35:51 +0000   Sun, 07 Dec 2025 23:34:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:35:51 +0000   Sun, 07 Dec 2025 23:34:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-313006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                1b1493a2-5c01-4861-a1e5-15f85715a778
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-btjrp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-313006                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-nzf5r                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-313006              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-313006     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-xw4pf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-313006              100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7w27h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zvhhr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node no-preload-313006 event: Registered Node no-preload-313006 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node no-preload-313006 event: Registered Node no-preload-313006 in Controller
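	The node description above is standard kubectl describe output captured for the no-preload profile. A hedged way to reproduce it against the same cluster, going through minikube's bundled kubectl so the profile's kubeconfig context is picked up, in the same invocation style used elsewhere in this report:
	
	    out/minikube-linux-amd64 -p no-preload-313006 kubectl -- describe node no-preload-313006
	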
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [7a318b0832368150c50b8e6bcc0b249c6c0f5e0835f526a9036a3f9d6818cc85] <==
	{"level":"warn","ts":"2025-12-07T23:35:19.685960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.692548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.698592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.705340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.712615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.719863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.726201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.732512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.738758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.746865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.755908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.763693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.770914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.777400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.783958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.803895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.810115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.817149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.823405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:35:19.867172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50816","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T23:35:51.321892Z","caller":"traceutil/trace.go:172","msg":"trace[2117102789] transaction","detail":"{read_only:false; response_revision:663; number_of_response:1; }","duration":"162.342371ms","start":"2025-12-07T23:35:51.159533Z","end":"2025-12-07T23:35:51.321875Z","steps":["trace[2117102789] 'process raft request'  (duration: 162.22862ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-07T23:35:51.718124Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.511883ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2025-12-07T23:35:51.718372Z","caller":"traceutil/trace.go:172","msg":"trace[1813324635] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:665; }","duration":"100.774283ms","start":"2025-12-07T23:35:51.617570Z","end":"2025-12-07T23:35:51.718345Z","steps":["trace[1813324635] 'agreement among raft nodes before linearized reading'  (duration: 81.676832ms)","trace[1813324635] 'range keys from in-memory index tree'  (duration: 18.724524ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-07T23:35:51.718386Z","caller":"traceutil/trace.go:172","msg":"trace[230611098] transaction","detail":"{read_only:false; response_revision:666; number_of_response:1; }","duration":"118.138837ms","start":"2025-12-07T23:35:51.600232Z","end":"2025-12-07T23:35:51.718371Z","steps":["trace[230611098] 'process raft request'  (duration: 99.064608ms)","trace[230611098] 'compare'  (duration: 18.856204ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-07T23:35:51.718369Z","caller":"traceutil/trace.go:172","msg":"trace[1879282581] transaction","detail":"{read_only:false; response_revision:667; number_of_response:1; }","duration":"114.490433ms","start":"2025-12-07T23:35:51.603862Z","end":"2025-12-07T23:35:51.718353Z","steps":["trace[1879282581] 'process raft request'  (duration: 114.413442ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:36:15 up  2:18,  0 user,  load average: 4.10, 2.57, 1.94
	Linux no-preload-313006 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [875984b7632065686e5488eaa175d1e9bc6f11d4ab18328ac4d3c2df479df442] <==
	I1207 23:35:21.038617       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:35:21.038870       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1207 23:35:21.039059       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:35:21.039079       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:35:21.039102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:35:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:35:21.237110       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:35:21.336477       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:35:21.336542       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:35:21.336934       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:35:21.736685       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:35:21.736713       1 metrics.go:72] Registering metrics
	I1207 23:35:21.736807       1 controller.go:711] "Syncing nftables rules"
	I1207 23:35:31.237644       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:35:31.237722       1 main.go:301] handling current node
	I1207 23:35:41.237531       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:35:41.237578       1 main.go:301] handling current node
	I1207 23:35:51.245490       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:35:51.245524       1 main.go:301] handling current node
	I1207 23:36:01.240462       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:36:01.240493       1 main.go:301] handling current node
	I1207 23:36:11.238403       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1207 23:36:11.238450       1 main.go:301] handling current node
	
	
	==> kube-apiserver [087d0f5345ac825bcf193ab138e126157b165b5aa86f1b652afd90640d7fda6e] <==
	I1207 23:35:20.341825       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:35:20.341934       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 23:35:20.342020       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:20.342287       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 23:35:20.342975       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1207 23:35:20.343033       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1207 23:35:20.343350       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1207 23:35:20.343360       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 23:35:20.343531       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:20.349905       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1207 23:35:20.354010       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:35:20.365383       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:35:20.373773       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:35:20.577097       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:35:20.627185       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:35:20.652761       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:35:20.670951       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:35:20.677621       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:35:20.714447       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.119.12"}
	I1207 23:35:20.723874       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.43.92"}
	I1207 23:35:21.245030       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1207 23:35:23.938570       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:35:23.938617       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:35:23.988529       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:35:24.040416       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [404e1d5beb2da9d3cc45722c51fc2e1c7b0c587a72d76030ae16a0117eb8350a] <==
	I1207 23:35:23.492221       1 range_allocator.go:177] "Sending events to api server"
	I1207 23:35:23.492159       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492267       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492167       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492293       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492176       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492358       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492363       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492138       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492381       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492267       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1207 23:35:23.492400       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:35:23.492406       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492115       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492169       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-313006"
	I1207 23:35:23.492671       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492736       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.492705       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1207 23:35:23.493041       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.500675       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.503076       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:35:23.592742       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:23.592763       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:35:23.592768       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:35:23.604023       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [393f33ab322dbe6524e1390a9b4b3524caaee37f8fd3322f5fa42afcba2d88c8] <==
	I1207 23:35:20.852603       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:35:20.926984       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:35:21.027150       1 shared_informer.go:377] "Caches are synced"
	I1207 23:35:21.027187       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1207 23:35:21.027265       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:35:21.047531       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:35:21.047599       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:35:21.053166       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:35:21.053618       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:35:21.053641       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:35:21.054871       1 config.go:200] "Starting service config controller"
	I1207 23:35:21.055266       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:35:21.054967       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:35:21.055300       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:35:21.055096       1 config.go:309] "Starting node config controller"
	I1207 23:35:21.055313       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:35:21.055319       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:35:21.054919       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:35:21.055343       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:35:21.156073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:35:21.156094       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:35:21.156107       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1902052b7fa9a51b713591332e8f8f19d13383667710cc98390abfe859d91e2c] <==
	I1207 23:35:19.288474       1 serving.go:386] Generated self-signed cert in-memory
	W1207 23:35:20.272895       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:35:20.272951       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:35:20.272963       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:35:20.272972       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:35:20.297811       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 23:35:20.297841       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:35:20.300652       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:35:20.300730       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:35:20.300810       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:35:20.300949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:35:20.401579       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 07 23:35:40 no-preload-313006 kubelet[724]: E1207 23:35:40.544648     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" containerName="dashboard-metrics-scraper"
	Dec 07 23:35:40 no-preload-313006 kubelet[724]: I1207 23:35:40.544745     724 scope.go:122] "RemoveContainer" containerID="d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485"
	Dec 07 23:35:40 no-preload-313006 kubelet[724]: E1207 23:35:40.544958     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7w27h_kubernetes-dashboard(d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" podUID="d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc"
	Dec 07 23:35:41 no-preload-313006 kubelet[724]: I1207 23:35:41.548860     724 scope.go:122] "RemoveContainer" containerID="47abf464763e71165bcdab4db1ebf65eb73a9e00bb6a4db90fb3163f12f3d1d5"
	Dec 07 23:35:41 no-preload-313006 kubelet[724]: E1207 23:35:41.549194     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" containerName="dashboard-metrics-scraper"
	Dec 07 23:35:41 no-preload-313006 kubelet[724]: I1207 23:35:41.549216     724 scope.go:122] "RemoveContainer" containerID="d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485"
	Dec 07 23:35:41 no-preload-313006 kubelet[724]: E1207 23:35:41.549425     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7w27h_kubernetes-dashboard(d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" podUID="d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc"
	Dec 07 23:35:48 no-preload-313006 kubelet[724]: E1207 23:35:48.682946     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" containerName="dashboard-metrics-scraper"
	Dec 07 23:35:48 no-preload-313006 kubelet[724]: I1207 23:35:48.682991     724 scope.go:122] "RemoveContainer" containerID="d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485"
	Dec 07 23:35:48 no-preload-313006 kubelet[724]: E1207 23:35:48.683181     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7w27h_kubernetes-dashboard(d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" podUID="d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc"
	Dec 07 23:35:51 no-preload-313006 kubelet[724]: I1207 23:35:51.582866     724 scope.go:122] "RemoveContainer" containerID="2c733f7f60399147a390c6e21cbb293e3dd549fd6dc613363b85209ca503d959"
	Dec 07 23:35:56 no-preload-313006 kubelet[724]: E1207 23:35:56.716661     724 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-btjrp" containerName="coredns"
	Dec 07 23:36:05 no-preload-313006 kubelet[724]: E1207 23:36:05.457541     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" containerName="dashboard-metrics-scraper"
	Dec 07 23:36:05 no-preload-313006 kubelet[724]: I1207 23:36:05.457596     724 scope.go:122] "RemoveContainer" containerID="d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485"
	Dec 07 23:36:05 no-preload-313006 kubelet[724]: I1207 23:36:05.624862     724 scope.go:122] "RemoveContainer" containerID="d86d5bc68a03178e36fcf86a2aa8dfeec1d0615e47d1e74c30c06b4324fc3485"
	Dec 07 23:36:05 no-preload-313006 kubelet[724]: E1207 23:36:05.625105     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" containerName="dashboard-metrics-scraper"
	Dec 07 23:36:05 no-preload-313006 kubelet[724]: I1207 23:36:05.625140     724 scope.go:122] "RemoveContainer" containerID="956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0"
	Dec 07 23:36:05 no-preload-313006 kubelet[724]: E1207 23:36:05.625360     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7w27h_kubernetes-dashboard(d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" podUID="d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc"
	Dec 07 23:36:08 no-preload-313006 kubelet[724]: E1207 23:36:08.683284     724 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" containerName="dashboard-metrics-scraper"
	Dec 07 23:36:08 no-preload-313006 kubelet[724]: I1207 23:36:08.683353     724 scope.go:122] "RemoveContainer" containerID="956668bdbf8d201d97440dac258e060ce7444a7f759273e89cb0b00bce91fbe0"
	Dec 07 23:36:08 no-preload-313006 kubelet[724]: E1207 23:36:08.683570     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7w27h_kubernetes-dashboard(d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7w27h" podUID="d8ba85c8-2a4f-4a46-813e-d9ce71c0e7cc"
	Dec 07 23:36:10 no-preload-313006 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 07 23:36:10 no-preload-313006 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 07 23:36:10 no-preload-313006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 07 23:36:10 no-preload-313006 systemd[1]: kubelet.service: Consumed 1.771s CPU time.
	
	
	==> kubernetes-dashboard [8a4e2c23a171e4e01d7e5be0846972a8e83d5db6e5feebf9d7658400cf5cf62e] <==
	2025/12/07 23:35:30 Using namespace: kubernetes-dashboard
	2025/12/07 23:35:30 Using in-cluster config to connect to apiserver
	2025/12/07 23:35:30 Using secret token for csrf signing
	2025/12/07 23:35:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/07 23:35:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/07 23:35:30 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/07 23:35:30 Generating JWE encryption key
	2025/12/07 23:35:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/07 23:35:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/07 23:35:30 Initializing JWE encryption key from synchronized object
	2025/12/07 23:35:30 Creating in-cluster Sidecar client
	2025/12/07 23:35:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/07 23:35:30 Serving insecurely on HTTP port: 9090
	2025/12/07 23:36:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/07 23:35:30 Starting overwatch
	
	
	==> storage-provisioner [2c733f7f60399147a390c6e21cbb293e3dd549fd6dc613363b85209ca503d959] <==
	I1207 23:35:20.815603       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1207 23:35:50.819165       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9d70771c342e0e6a8b340491d36ea107bf8abe93159eff71b6b33c5a89df58be] <==
	I1207 23:35:52.945515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 23:35:52.953253       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 23:35:52.953306       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 23:35:52.955567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:35:56.411199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:00.671544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:04.269937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:07.323685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:10.347360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:10.353238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:36:10.353441       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 23:36:10.353652       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-313006_3393b33c-f65f-4aee-ba6a-fdc018c105b9!
	I1207 23:36:10.354846       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27117f0f-4148-42d8-a5da-bf1f690374b0", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-313006_3393b33c-f65f-4aee-ba6a-fdc018c105b9 became leader
	W1207 23:36:10.360948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:10.370389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:36:10.454267       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-313006_3393b33c-f65f-4aee-ba6a-fdc018c105b9!
	W1207 23:36:12.374169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:12.378775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:14.382641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:14.392061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-313006 -n no-preload-313006
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-313006 -n no-preload-313006: exit status 2 (339.302614ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-313006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-858719 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-858719 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (325.305982ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:36:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-858719 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-858719
helpers_test.go:243: (dbg) docker inspect newest-cni-858719:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0",
	        "Created": "2025-12-07T23:36:01.669904707Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 667557,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:36:01.70721535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0/hosts",
	        "LogPath": "/var/lib/docker/containers/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0-json.log",
	        "Name": "/newest-cni-858719",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-858719:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-858719",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0",
	                "LowerDir": "/var/lib/docker/overlay2/c1a2963994212dfc7e08a1440d19707a2cf4a7d92846359bfe33ec782362bc68-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c1a2963994212dfc7e08a1440d19707a2cf4a7d92846359bfe33ec782362bc68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c1a2963994212dfc7e08a1440d19707a2cf4a7d92846359bfe33ec782362bc68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c1a2963994212dfc7e08a1440d19707a2cf4a7d92846359bfe33ec782362bc68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-858719",
	                "Source": "/var/lib/docker/volumes/newest-cni-858719/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-858719",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-858719",
	                "name.minikube.sigs.k8s.io": "newest-cni-858719",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f3c5ef6f6aee5fd374533c9d7844f3f5417ce1936bd1eb824b48dc8e1d9fb9c7",
	            "SandboxKey": "/var/run/docker/netns/f3c5ef6f6aee",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-858719": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "688a12ba5396bc3f2e98a59b391778bc7eb9ccbb9500e4ff61c9584eece383c6",
	                    "EndpointID": "a9222a374bf5258629f92e1f4e71cbe3821db39660f8070a2be982f897ad033b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "e2:a6:98:04:ea:96",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-858719",
	                        "a277f941d919"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-858719 -n newest-cni-858719
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-858719 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-858719 logs -n 25: (1.163196341s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-313006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │                     │
	│ stop    │ -p no-preload-313006 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:34 UTC │ 07 Dec 25 23:35 UTC │
	│ addons  │ enable dashboard -p no-preload-313006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ image   │ old-k8s-version-320477 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ pause   │ -p old-k8s-version-320477 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p disable-driver-mounts-837628                                                                                                                                                                                                                      │ disable-driver-mounts-837628 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p default-k8s-diff-port-312944 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-703538                                                                                                                                                                                                                         │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-654118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ stop    │ -p embed-certs-654118 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ image   │ no-preload-313006 image list --format=json                                                                                                                                                                                                           │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ pause   │ -p no-preload-313006 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ delete  │ -p no-preload-313006                                                                                                                                                                                                                                 │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-858719 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-654118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ delete  │ -p no-preload-313006                                                                                                                                                                                                                                 │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ start   │ -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ start   │ -p auto-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:36:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:36:20.000852  673565 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:36:20.001198  673565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:36:20.001212  673565 out.go:374] Setting ErrFile to fd 2...
	I1207 23:36:20.001219  673565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:36:20.001551  673565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:36:20.002205  673565 out.go:368] Setting JSON to false
	I1207 23:36:20.003741  673565 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8324,"bootTime":1765142256,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:36:20.003825  673565 start.go:143] virtualization: kvm guest
	I1207 23:36:20.005799  673565 out.go:179] * [auto-600852] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:36:20.007303  673565 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:36:20.007396  673565 notify.go:221] Checking for updates...
	I1207 23:36:20.012362  673565 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:36:20.013757  673565 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:36:20.015035  673565 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:36:20.016233  673565 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:36:20.017462  673565 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:36:19.978001  673247 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:36:19.978087  673247 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 23:36:19.978128  673247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:36:19.978155  673247 cache.go:65] Caching tarball of preloaded images
	I1207 23:36:19.978537  673247 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:36:19.978695  673247 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:36:19.978857  673247 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/embed-certs-654118/config.json ...
	I1207 23:36:20.010682  673247 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:36:20.010703  673247 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:36:20.010731  673247 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:36:20.010767  673247 start.go:360] acquireMachinesLock for embed-certs-654118: {Name:mk7c4d25ea4936301d1a96de829bb052643e31a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:36:20.010836  673247 start.go:364] duration metric: took 45.012µs to acquireMachinesLock for "embed-certs-654118"
	I1207 23:36:20.010856  673247 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:36:20.010862  673247 fix.go:54] fixHost starting: 
	I1207 23:36:20.011161  673247 cli_runner.go:164] Run: docker container inspect embed-certs-654118 --format={{.State.Status}}
	I1207 23:36:20.036288  673247 fix.go:112] recreateIfNeeded on embed-certs-654118: state=Stopped err=<nil>
	W1207 23:36:20.036353  673247 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.795201348Z" level=info msg="Running pod sandbox: kube-system/kindnet-5zzk9/POD" id=2ed9e954-c1d4-459b-8b3b-a82b90c04e2f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.795265481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.795610269Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.796572324Z" level=info msg="Ran pod sandbox a08c47ceb46cba55337065092ba2d81e3bfd0e377f868e419f15dcb546955ed4 with infra container: kube-system/kube-proxy-p8v8n/POD" id=02a7929c-8304-4136-be64-74ad3d28a95f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.79802412Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=305f7f1d-7369-4c35-8aaf-4cb687258716 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.798104657Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=2ed9e954-c1d4-459b-8b3b-a82b90c04e2f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.799120004Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=6cfceecc-e9ec-41df-b195-16cd2b40e785 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.799876026Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.800697769Z" level=info msg="Ran pod sandbox 0fe840a988c90238d43b20ed0193e5d4b251a494625cb280a7606a34d6cb07f7 with infra container: kube-system/kindnet-5zzk9/POD" id=2ed9e954-c1d4-459b-8b3b-a82b90c04e2f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.801928226Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8aaff1fe-d305-4481-b120-26a4ed89fa07 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.80448121Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7ee77245-90e9-432a-b6cb-65f7513332df name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.805972553Z" level=info msg="Creating container: kube-system/kube-proxy-p8v8n/kube-proxy" id=13ff5947-3e53-486f-aee7-f70d952e669c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.806108264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.811117644Z" level=info msg="Creating container: kube-system/kindnet-5zzk9/kindnet-cni" id=35b083a5-286b-40fb-b7ab-cc383d0975ce name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.811225734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.812108835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.812767692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.815600097Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.816156292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.852261513Z" level=info msg="Created container da862aec3df333fedd021c671c0147587fea76a4bd42c00e762d44cf945a4ef5: kube-system/kindnet-5zzk9/kindnet-cni" id=35b083a5-286b-40fb-b7ab-cc383d0975ce name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.853936544Z" level=info msg="Starting container: da862aec3df333fedd021c671c0147587fea76a4bd42c00e762d44cf945a4ef5" id=a40e2153-3ddc-4286-9dee-482d9ee3dcc7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.856629545Z" level=info msg="Created container 8af71238c2f294d770fec1bf64b43673af964ff49005061c879b4b505fbd53d2: kube-system/kube-proxy-p8v8n/kube-proxy" id=13ff5947-3e53-486f-aee7-f70d952e669c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.858237398Z" level=info msg="Started container" PID=1573 containerID=da862aec3df333fedd021c671c0147587fea76a4bd42c00e762d44cf945a4ef5 description=kube-system/kindnet-5zzk9/kindnet-cni id=a40e2153-3ddc-4286-9dee-482d9ee3dcc7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0fe840a988c90238d43b20ed0193e5d4b251a494625cb280a7606a34d6cb07f7
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.858419444Z" level=info msg="Starting container: 8af71238c2f294d770fec1bf64b43673af964ff49005061c879b4b505fbd53d2" id=58c464a2-057b-4726-90a8-2b23280baec9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:36:18 newest-cni-858719 crio[771]: time="2025-12-07T23:36:18.864060607Z" level=info msg="Started container" PID=1574 containerID=8af71238c2f294d770fec1bf64b43673af964ff49005061c879b4b505fbd53d2 description=kube-system/kube-proxy-p8v8n/kube-proxy id=58c464a2-057b-4726-90a8-2b23280baec9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a08c47ceb46cba55337065092ba2d81e3bfd0e377f868e419f15dcb546955ed4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	da862aec3df33       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   0fe840a988c90       kindnet-5zzk9                               kube-system
	8af71238c2f29       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   1 second ago        Running             kube-proxy                0                   a08c47ceb46cb       kube-proxy-p8v8n                            kube-system
	6fe7ccbbfa15d       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   12 seconds ago      Running             kube-scheduler            0                   3874ddc31ca61       kube-scheduler-newest-cni-858719            kube-system
	36298885e7a5a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   12 seconds ago      Running             etcd                      0                   0539067d54cab       etcd-newest-cni-858719                      kube-system
	b6992e558e210       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   12 seconds ago      Running             kube-controller-manager   0                   dd3a4717e07ce       kube-controller-manager-newest-cni-858719   kube-system
	c939dab275435       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   12 seconds ago      Running             kube-apiserver            0                   b5bb295f97d6f       kube-apiserver-newest-cni-858719            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-858719
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-858719
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=newest-cni-858719
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_36_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:36:10 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-858719
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:36:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:36:12 +0000   Sun, 07 Dec 2025 23:36:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:36:12 +0000   Sun, 07 Dec 2025 23:36:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:36:12 +0000   Sun, 07 Dec 2025 23:36:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 07 Dec 2025 23:36:12 +0000   Sun, 07 Dec 2025 23:36:08 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-858719
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                2fe19260-c79d-4da0-b8eb-1e49571b8323
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-858719                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-5zzk9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-858719             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-858719    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-p8v8n                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-858719             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-858719 event: Registered Node newest-cni-858719 in Controller
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [36298885e7a5a5f3ab248ae106183688731375ca57832ac2827aa95a32b13ba7] <==
	{"level":"warn","ts":"2025-12-07T23:36:09.298378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.305272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.313561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.320262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.326921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.333678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.340537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.347136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.354530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.361381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.369503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.376010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.382473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.388806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.395555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.409089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.415598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.422163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.428544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.435207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.453083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.459639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.466894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.473320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:09.530251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45364","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:36:20 up  2:18,  0 user,  load average: 4.97, 2.77, 2.01
	Linux newest-cni-858719 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [da862aec3df333fedd021c671c0147587fea76a4bd42c00e762d44cf945a4ef5] <==
	I1207 23:36:19.124884       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:36:19.125200       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1207 23:36:19.125448       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:36:19.125470       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:36:19.125502       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:36:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:36:19.357313       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:36:19.423603       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:36:19.423682       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:36:19.429660       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [c939dab275435c571843f45171c679c6dea8920250fef8569a0101cdf9e927ad] <==
	I1207 23:36:10.248001       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1207 23:36:10.255652       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:10.255757       1 policy_source.go:248] refreshing policies
	I1207 23:36:10.278566       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:36:10.278761       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:36:10.280894       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1207 23:36:10.290544       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:36:10.432586       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:36:11.075941       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1207 23:36:11.084245       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1207 23:36:11.084272       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1207 23:36:11.688714       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:36:11.728001       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:36:11.782234       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1207 23:36:11.788872       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1207 23:36:11.790035       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:36:11.794382       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:36:12.140023       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:36:12.987571       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:36:12.998809       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1207 23:36:13.007694       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:36:17.693566       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:36:17.697733       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:36:17.995884       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:36:18.155281       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b6992e558e21035cd78bd07fbeabf9e3d42cae32aeaebba5e1b6340c3dc3ec6c] <==
	I1207 23:36:16.947585       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.947672       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.947700       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.948085       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.948359       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.948367       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.948937       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.949503       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.950057       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.950098       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.950230       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.950310       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.950576       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.950768       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.951490       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.951623       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.954441       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.954449       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.954435       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:16.957803       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-858719" podCIDRs=["10.42.0.0/24"]
	I1207 23:36:16.962817       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:36:17.050104       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:17.050128       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:36:17.050133       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:36:17.063673       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [8af71238c2f294d770fec1bf64b43673af964ff49005061c879b4b505fbd53d2] <==
	I1207 23:36:18.933161       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:36:19.032696       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:36:19.133893       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:19.133933       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1207 23:36:19.134065       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:36:19.158599       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:36:19.158664       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:36:19.165209       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:36:19.165705       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:36:19.165732       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:36:19.167447       1 config.go:200] "Starting service config controller"
	I1207 23:36:19.167460       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:36:19.167472       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:36:19.167478       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:36:19.167509       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:36:19.167515       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:36:19.167948       1 config.go:309] "Starting node config controller"
	I1207 23:36:19.167997       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:36:19.168026       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:36:19.268563       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:36:19.268612       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:36:19.268724       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6fe7ccbbfa15d4a60308b8b41dd2a90c48d86ca850caf36818b42e10bec7ddc8] <==
	E1207 23:36:11.043830       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1207 23:36:11.045258       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1207 23:36:11.062934       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:36:11.062984       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:36:11.064280       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1207 23:36:11.064489       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1207 23:36:11.071743       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1207 23:36:11.072942       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1207 23:36:11.114899       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:36:11.115912       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1207 23:36:11.259801       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1207 23:36:11.261521       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1207 23:36:11.266886       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope"
	E1207 23:36:11.268293       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1207 23:36:11.340050       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1207 23:36:11.341586       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1207 23:36:11.372752       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope"
	E1207 23:36:11.373881       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1207 23:36:11.425517       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1207 23:36:11.426657       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1207 23:36:11.431911       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1207 23:36:11.433199       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1207 23:36:11.466465       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1207 23:36:11.467616       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1207 23:36:14.381581       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 07 23:36:13 newest-cni-858719 kubelet[1289]: E1207 23:36:13.904642    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-858719" containerName="kube-apiserver"
	Dec 07 23:36:13 newest-cni-858719 kubelet[1289]: I1207 23:36:13.932310    1289 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-858719" podStartSLOduration=2.932287246 podStartE2EDuration="2.932287246s" podCreationTimestamp="2025-12-07 23:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:13.919080491 +0000 UTC m=+1.161912378" watchObservedRunningTime="2025-12-07 23:36:13.932287246 +0000 UTC m=+1.175119130"
	Dec 07 23:36:13 newest-cni-858719 kubelet[1289]: I1207 23:36:13.932839    1289 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-858719" podStartSLOduration=1.9328202559999998 podStartE2EDuration="1.932820256s" podCreationTimestamp="2025-12-07 23:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:13.931961212 +0000 UTC m=+1.174793101" watchObservedRunningTime="2025-12-07 23:36:13.932820256 +0000 UTC m=+1.175652144"
	Dec 07 23:36:13 newest-cni-858719 kubelet[1289]: I1207 23:36:13.958380    1289 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-858719" podStartSLOduration=1.958359164 podStartE2EDuration="1.958359164s" podCreationTimestamp="2025-12-07 23:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:13.944249706 +0000 UTC m=+1.187081589" watchObservedRunningTime="2025-12-07 23:36:13.958359164 +0000 UTC m=+1.201191051"
	Dec 07 23:36:14 newest-cni-858719 kubelet[1289]: E1207 23:36:14.887507    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-858719" containerName="kube-apiserver"
	Dec 07 23:36:14 newest-cni-858719 kubelet[1289]: E1207 23:36:14.887698    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-858719" containerName="kube-scheduler"
	Dec 07 23:36:14 newest-cni-858719 kubelet[1289]: E1207 23:36:14.887920    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-858719" containerName="etcd"
	Dec 07 23:36:14 newest-cni-858719 kubelet[1289]: E1207 23:36:14.888206    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-858719" containerName="kube-controller-manager"
	Dec 07 23:36:14 newest-cni-858719 kubelet[1289]: I1207 23:36:14.904380    1289 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-858719" podStartSLOduration=2.904357868 podStartE2EDuration="2.904357868s" podCreationTimestamp="2025-12-07 23:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:13.958594156 +0000 UTC m=+1.201426044" watchObservedRunningTime="2025-12-07 23:36:14.904357868 +0000 UTC m=+2.147189756"
	Dec 07 23:36:15 newest-cni-858719 kubelet[1289]: E1207 23:36:15.889158    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-858719" containerName="kube-scheduler"
	Dec 07 23:36:16 newest-cni-858719 kubelet[1289]: E1207 23:36:16.261625    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-858719" containerName="kube-controller-manager"
	Dec 07 23:36:16 newest-cni-858719 kubelet[1289]: E1207 23:36:16.893391    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-858719" containerName="kube-scheduler"
	Dec 07 23:36:16 newest-cni-858719 kubelet[1289]: I1207 23:36:16.996561    1289 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 07 23:36:16 newest-cni-858719 kubelet[1289]: I1207 23:36:16.997260    1289 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 07 23:36:17 newest-cni-858719 kubelet[1289]: E1207 23:36:17.948106    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-858719" containerName="etcd"
	Dec 07 23:36:18 newest-cni-858719 kubelet[1289]: I1207 23:36:18.380064    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/494a11f1-086c-43f3-92e7-4b59d073c5f9-kube-proxy\") pod \"kube-proxy-p8v8n\" (UID: \"494a11f1-086c-43f3-92e7-4b59d073c5f9\") " pod="kube-system/kube-proxy-p8v8n"
	Dec 07 23:36:18 newest-cni-858719 kubelet[1289]: I1207 23:36:18.380124    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/494a11f1-086c-43f3-92e7-4b59d073c5f9-xtables-lock\") pod \"kube-proxy-p8v8n\" (UID: \"494a11f1-086c-43f3-92e7-4b59d073c5f9\") " pod="kube-system/kube-proxy-p8v8n"
	Dec 07 23:36:18 newest-cni-858719 kubelet[1289]: I1207 23:36:18.380159    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b8e05261-d743-488e-9543-b60973ff09b4-cni-cfg\") pod \"kindnet-5zzk9\" (UID: \"b8e05261-d743-488e-9543-b60973ff09b4\") " pod="kube-system/kindnet-5zzk9"
	Dec 07 23:36:18 newest-cni-858719 kubelet[1289]: I1207 23:36:18.380198    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8e05261-d743-488e-9543-b60973ff09b4-xtables-lock\") pod \"kindnet-5zzk9\" (UID: \"b8e05261-d743-488e-9543-b60973ff09b4\") " pod="kube-system/kindnet-5zzk9"
	Dec 07 23:36:18 newest-cni-858719 kubelet[1289]: I1207 23:36:18.380222    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8e05261-d743-488e-9543-b60973ff09b4-lib-modules\") pod \"kindnet-5zzk9\" (UID: \"b8e05261-d743-488e-9543-b60973ff09b4\") " pod="kube-system/kindnet-5zzk9"
	Dec 07 23:36:18 newest-cni-858719 kubelet[1289]: I1207 23:36:18.380245    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vg9t\" (UniqueName: \"kubernetes.io/projected/b8e05261-d743-488e-9543-b60973ff09b4-kube-api-access-7vg9t\") pod \"kindnet-5zzk9\" (UID: \"b8e05261-d743-488e-9543-b60973ff09b4\") " pod="kube-system/kindnet-5zzk9"
	Dec 07 23:36:18 newest-cni-858719 kubelet[1289]: I1207 23:36:18.380273    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h46s\" (UniqueName: \"kubernetes.io/projected/494a11f1-086c-43f3-92e7-4b59d073c5f9-kube-api-access-6h46s\") pod \"kube-proxy-p8v8n\" (UID: \"494a11f1-086c-43f3-92e7-4b59d073c5f9\") " pod="kube-system/kube-proxy-p8v8n"
	Dec 07 23:36:18 newest-cni-858719 kubelet[1289]: I1207 23:36:18.380299    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/494a11f1-086c-43f3-92e7-4b59d073c5f9-lib-modules\") pod \"kube-proxy-p8v8n\" (UID: \"494a11f1-086c-43f3-92e7-4b59d073c5f9\") " pod="kube-system/kube-proxy-p8v8n"
	Dec 07 23:36:18 newest-cni-858719 kubelet[1289]: I1207 23:36:18.924494    1289 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-5zzk9" podStartSLOduration=0.92445741 podStartE2EDuration="924.45741ms" podCreationTimestamp="2025-12-07 23:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:18.923181764 +0000 UTC m=+6.166013664" watchObservedRunningTime="2025-12-07 23:36:18.92445741 +0000 UTC m=+6.167289298"
	Dec 07 23:36:18 newest-cni-858719 kubelet[1289]: I1207 23:36:18.937695    1289 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-p8v8n" podStartSLOduration=0.937677511 podStartE2EDuration="937.677511ms" podCreationTimestamp="2025-12-07 23:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:18.937433304 +0000 UTC m=+6.180265197" watchObservedRunningTime="2025-12-07 23:36:18.937677511 +0000 UTC m=+6.180509398"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-858719 -n newest-cni-858719
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-858719 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-dp6qz storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-858719 describe pod coredns-7d764666f9-dp6qz storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-858719 describe pod coredns-7d764666f9-dp6qz storage-provisioner: exit status 1 (66.843651ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-dp6qz" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-858719 describe pod coredns-7d764666f9-dp6qz storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.60s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-858719 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-858719 --alsologtostderr -v=1: exit status 80 (2.433380307s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-858719 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:36:42.923741  681318 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:36:42.923994  681318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:36:42.924002  681318 out.go:374] Setting ErrFile to fd 2...
	I1207 23:36:42.924006  681318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:36:42.924211  681318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:36:42.924473  681318 out.go:368] Setting JSON to false
	I1207 23:36:42.924492  681318 mustload.go:66] Loading cluster: newest-cni-858719
	I1207 23:36:42.924886  681318 config.go:182] Loaded profile config "newest-cni-858719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:42.925369  681318 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:42.945173  681318 host.go:66] Checking if "newest-cni-858719" exists ...
	I1207 23:36:42.945563  681318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:36:43.019022  681318 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-07 23:36:43.007383373 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:36:43.019673  681318 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-858719 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1207 23:36:43.021628  681318 out.go:179] * Pausing node newest-cni-858719 ... 
	I1207 23:36:43.022701  681318 host.go:66] Checking if "newest-cni-858719" exists ...
	I1207 23:36:43.022975  681318 ssh_runner.go:195] Run: systemctl --version
	I1207 23:36:43.023021  681318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:43.042663  681318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:43.142640  681318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:43.155383  681318 pause.go:52] kubelet running: true
	I1207 23:36:43.155449  681318 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:36:43.321708  681318 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:36:43.321909  681318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:36:43.400067  681318 cri.go:89] found id: "cc1cd9bf7531e730eee0e48829fb2f2262509a9acb9a58a449d07c2908258bae"
	I1207 23:36:43.400096  681318 cri.go:89] found id: "b16beb4e4b195daeeefa06631cdab33892ab5de00e1eaa4f3d42a32591fc4c36"
	I1207 23:36:43.400103  681318 cri.go:89] found id: "1fde05929ea13b803231bae6fb303618dc3a2b54347fde44f9fc6cbc20d0c478"
	I1207 23:36:43.400108  681318 cri.go:89] found id: "09b2ae0a7c5b9e30441c564fc12ee45fca2591d70a3b0c4f829362d1f7b1c11c"
	I1207 23:36:43.400136  681318 cri.go:89] found id: "20259f47f9c60903d1615e570f4a362857f9df6b8c1ceeeb7dae4a4a6bddec57"
	I1207 23:36:43.400142  681318 cri.go:89] found id: "60889310640bb67836703a1f3f74d931394169d4bb63a245566fc54bf5762844"
	I1207 23:36:43.400146  681318 cri.go:89] found id: ""
	I1207 23:36:43.400203  681318 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:36:43.417932  681318 retry.go:31] will retry after 227.571382ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:36:43Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:36:43.646501  681318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:43.660478  681318 pause.go:52] kubelet running: false
	I1207 23:36:43.660546  681318 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:36:43.784841  681318 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:36:43.784918  681318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:36:43.874927  681318 cri.go:89] found id: "cc1cd9bf7531e730eee0e48829fb2f2262509a9acb9a58a449d07c2908258bae"
	I1207 23:36:43.874951  681318 cri.go:89] found id: "b16beb4e4b195daeeefa06631cdab33892ab5de00e1eaa4f3d42a32591fc4c36"
	I1207 23:36:43.874957  681318 cri.go:89] found id: "1fde05929ea13b803231bae6fb303618dc3a2b54347fde44f9fc6cbc20d0c478"
	I1207 23:36:43.874962  681318 cri.go:89] found id: "09b2ae0a7c5b9e30441c564fc12ee45fca2591d70a3b0c4f829362d1f7b1c11c"
	I1207 23:36:43.874966  681318 cri.go:89] found id: "20259f47f9c60903d1615e570f4a362857f9df6b8c1ceeeb7dae4a4a6bddec57"
	I1207 23:36:43.874971  681318 cri.go:89] found id: "60889310640bb67836703a1f3f74d931394169d4bb63a245566fc54bf5762844"
	I1207 23:36:43.874975  681318 cri.go:89] found id: ""
	I1207 23:36:43.875020  681318 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:36:43.887798  681318 retry.go:31] will retry after 489.099996ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:36:43Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:36:44.377474  681318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:44.394690  681318 pause.go:52] kubelet running: false
	I1207 23:36:44.394755  681318 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:36:44.564476  681318 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:36:44.564604  681318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:36:44.655115  681318 cri.go:89] found id: "cc1cd9bf7531e730eee0e48829fb2f2262509a9acb9a58a449d07c2908258bae"
	I1207 23:36:44.655147  681318 cri.go:89] found id: "b16beb4e4b195daeeefa06631cdab33892ab5de00e1eaa4f3d42a32591fc4c36"
	I1207 23:36:44.655153  681318 cri.go:89] found id: "1fde05929ea13b803231bae6fb303618dc3a2b54347fde44f9fc6cbc20d0c478"
	I1207 23:36:44.655158  681318 cri.go:89] found id: "09b2ae0a7c5b9e30441c564fc12ee45fca2591d70a3b0c4f829362d1f7b1c11c"
	I1207 23:36:44.655163  681318 cri.go:89] found id: "20259f47f9c60903d1615e570f4a362857f9df6b8c1ceeeb7dae4a4a6bddec57"
	I1207 23:36:44.655168  681318 cri.go:89] found id: "60889310640bb67836703a1f3f74d931394169d4bb63a245566fc54bf5762844"
	I1207 23:36:44.655188  681318 cri.go:89] found id: ""
	I1207 23:36:44.655267  681318 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:36:44.673434  681318 retry.go:31] will retry after 346.992326ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:36:44Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:36:45.021078  681318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:45.039036  681318 pause.go:52] kubelet running: false
	I1207 23:36:45.039103  681318 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:36:45.183491  681318 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:36:45.183580  681318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:36:45.260282  681318 cri.go:89] found id: "cc1cd9bf7531e730eee0e48829fb2f2262509a9acb9a58a449d07c2908258bae"
	I1207 23:36:45.260306  681318 cri.go:89] found id: "b16beb4e4b195daeeefa06631cdab33892ab5de00e1eaa4f3d42a32591fc4c36"
	I1207 23:36:45.260311  681318 cri.go:89] found id: "1fde05929ea13b803231bae6fb303618dc3a2b54347fde44f9fc6cbc20d0c478"
	I1207 23:36:45.260317  681318 cri.go:89] found id: "09b2ae0a7c5b9e30441c564fc12ee45fca2591d70a3b0c4f829362d1f7b1c11c"
	I1207 23:36:45.260321  681318 cri.go:89] found id: "20259f47f9c60903d1615e570f4a362857f9df6b8c1ceeeb7dae4a4a6bddec57"
	I1207 23:36:45.260358  681318 cri.go:89] found id: "60889310640bb67836703a1f3f74d931394169d4bb63a245566fc54bf5762844"
	I1207 23:36:45.260369  681318 cri.go:89] found id: ""
	I1207 23:36:45.260416  681318 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:36:45.275521  681318 out.go:203] 
	W1207 23:36:45.276669  681318 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:36:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 23:36:45.276704  681318 out.go:285] * 
	W1207 23:36:45.282613  681318 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 23:36:45.283598  681318 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-858719 --alsologtostderr -v=1 failed: exit status 80
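The exit status 80 above is minikube's pause path shelling out to `sudo runc list -f json` on the node and giving up after the retries because /run/runc never appears. A quick manual check of the same node (a diagnostic sketch, not part of the test; the /run/crun path is an assumption, since cri-o builds are often configured with crun as the default OCI runtime):

    # open a shell in the node container for this profile
    out/minikube-linux-amd64 ssh -p newest-cni-858719
    # re-run the exact command the pause logic used
    sudo runc list -f json
    # see which runtime state directories actually exist; if cri-o is using crun,
    # container state lives under /run/crun and the runc listing finds nothing
    ls -d /run/runc /run/crun 2>/dev/null
    # list the same namespace through the CRI instead, as the retries above do
    sudo crictl ps --label io.kubernetes.pod.namespace=kube-system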
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-858719
helpers_test.go:243: (dbg) docker inspect newest-cni-858719:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0",
	        "Created": "2025-12-07T23:36:01.669904707Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 678149,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:36:30.514754277Z",
	            "FinishedAt": "2025-12-07T23:36:29.411067332Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0/hosts",
	        "LogPath": "/var/lib/docker/containers/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0-json.log",
	        "Name": "/newest-cni-858719",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-858719:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-858719",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0",
	                "LowerDir": "/var/lib/docker/overlay2/c1a2963994212dfc7e08a1440d19707a2cf4a7d92846359bfe33ec782362bc68-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c1a2963994212dfc7e08a1440d19707a2cf4a7d92846359bfe33ec782362bc68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c1a2963994212dfc7e08a1440d19707a2cf4a7d92846359bfe33ec782362bc68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c1a2963994212dfc7e08a1440d19707a2cf4a7d92846359bfe33ec782362bc68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-858719",
	                "Source": "/var/lib/docker/volumes/newest-cni-858719/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-858719",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-858719",
	                "name.minikube.sigs.k8s.io": "newest-cni-858719",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b787ab5937ad70d3972573f353a6c0068f443d650fb3187cbc511a004d0ecdc8",
	            "SandboxKey": "/var/run/docker/netns/b787ab5937ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-858719": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "688a12ba5396bc3f2e98a59b391778bc7eb9ccbb9500e4ff61c9584eece383c6",
	                    "EndpointID": "a84440903c82dd8dad9d3d6506eb0e2b53cb429c9101dece6d93a83b5d5bdaa5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "72:1f:c5:d7:a2:a0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-858719",
	                        "a277f941d919"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
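The "Ports" block in the inspect output above is what the pause command queried earlier (the 23:36:43.023021 cli_runner line) to find the node's SSH endpoint. As a sketch using the same Go template, the mapped SSH port can be read directly on the host:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-858719
    # prints 33473 for the state shown above; an equivalent manual session would be roughly
    # ssh -i /home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa -p 33473 docker@127.0.0.1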
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-858719 -n newest-cni-858719
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-858719 -n newest-cni-858719: exit status 2 (399.316475ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
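Because the check only asks for `{{.Host}}`, it prints "Running" even though the failed pause left the kubelet disabled; the non-zero exit code is the status command signalling that other components are not healthy, which is why the harness treats it as "may be ok". A fuller component view (a sketch; the exact field set can vary between minikube versions):

    out/minikube-linux-amd64 status -p newest-cni-858719
    # typically reports host, kubelet, apiserver and kubeconfig states separately,
    # e.g. host Running but kubelet Stopped after the partial pause above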
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-858719 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-858719 logs -n 25: (1.118803448s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ pause   │ -p old-k8s-version-320477 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p disable-driver-mounts-837628                                                                                                                                                                                                                      │ disable-driver-mounts-837628 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p default-k8s-diff-port-312944 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:36 UTC │
	│ delete  │ -p kubernetes-upgrade-703538                                                                                                                                                                                                                         │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-654118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ stop    │ -p embed-certs-654118 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ image   │ no-preload-313006 image list --format=json                                                                                                                                                                                                           │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ pause   │ -p no-preload-313006 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ delete  │ -p no-preload-313006                                                                                                                                                                                                                                 │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-858719 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-654118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ delete  │ -p no-preload-313006                                                                                                                                                                                                                                 │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ start   │ -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ start   │ -p auto-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ stop    │ -p newest-cni-858719 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-858719 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ start   │ -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ image   │ newest-cni-858719 image list --format=json                                                                                                                                                                                                           │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ pause   │ -p newest-cni-858719 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-312944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:36:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
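Read against that format, the first entry below, `I1207 23:36:30.199382  677704 out.go:360] Setting OutFile to fd 1 ...`, breaks down as severity I (info), date 12/07, time 23:36:30.199382, thread id 677704, and source location out.go:360.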
	I1207 23:36:30.199382  677704 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:36:30.199678  677704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:36:30.199690  677704 out.go:374] Setting ErrFile to fd 2...
	I1207 23:36:30.199696  677704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:36:30.199985  677704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:36:30.200696  677704 out.go:368] Setting JSON to false
	I1207 23:36:30.202255  677704 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8334,"bootTime":1765142256,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:36:30.202356  677704 start.go:143] virtualization: kvm guest
	I1207 23:36:30.204485  677704 out.go:179] * [newest-cni-858719] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:36:30.206079  677704 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:36:30.206102  677704 notify.go:221] Checking for updates...
	I1207 23:36:30.208549  677704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:36:30.209775  677704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:36:30.214561  677704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:36:30.215983  677704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:36:30.217521  677704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:36:30.219339  677704 config.go:182] Loaded profile config "newest-cni-858719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:30.220075  677704 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:36:30.244737  677704 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:36:30.244935  677704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:36:30.311650  677704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-07 23:36:30.299453318 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:36:30.311817  677704 docker.go:319] overlay module found
	I1207 23:36:30.315570  677704 out.go:179] * Using the docker driver based on existing profile
	I1207 23:36:30.317497  677704 start.go:309] selected driver: docker
	I1207 23:36:30.317524  677704 start.go:927] validating driver "docker" against &{Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:30.317669  677704 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:36:30.318487  677704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:36:30.399830  677704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-07 23:36:30.383304383 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:36:30.401873  677704 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1207 23:36:30.401972  677704 cni.go:84] Creating CNI manager for ""
	I1207 23:36:30.402072  677704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:30.402132  677704 start.go:353] cluster config:
	{Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:30.404155  677704 out.go:179] * Starting "newest-cni-858719" primary control-plane node in "newest-cni-858719" cluster
	I1207 23:36:30.405367  677704 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:36:30.406789  677704 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:36:30.408087  677704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:36:30.408131  677704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1207 23:36:30.408146  677704 cache.go:65] Caching tarball of preloaded images
	I1207 23:36:30.408265  677704 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:36:30.408277  677704 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1207 23:36:30.408426  677704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json ...
	I1207 23:36:30.408463  677704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:36:30.446133  677704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:36:30.446161  677704 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:36:30.446179  677704 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:36:30.446224  677704 start.go:360] acquireMachinesLock for newest-cni-858719: {Name:mk3f9783a06cd72eff911e9615fc59e854b06695 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:36:30.446291  677704 start.go:364] duration metric: took 37.32µs to acquireMachinesLock for "newest-cni-858719"
	I1207 23:36:30.446316  677704 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:36:30.446340  677704 fix.go:54] fixHost starting: 
	I1207 23:36:30.446637  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:30.475469  677704 fix.go:112] recreateIfNeeded on newest-cni-858719: state=Stopped err=<nil>
	W1207 23:36:30.475505  677704 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:36:30.014058  673565 start.go:296] duration metric: took 177.314443ms for postStartSetup
	I1207 23:36:30.014519  673565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-600852
	I1207 23:36:30.038610  673565 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/config.json ...
	I1207 23:36:30.038964  673565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:36:30.039016  673565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-600852
	I1207 23:36:30.065777  673565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/auto-600852/id_rsa Username:docker}
	I1207 23:36:30.162043  673565 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:36:30.167397  673565 start.go:128] duration metric: took 9.888881461s to createHost
	I1207 23:36:30.167425  673565 start.go:83] releasing machines lock for "auto-600852", held for 9.889029296s
	I1207 23:36:30.167504  673565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-600852
	I1207 23:36:30.187852  673565 ssh_runner.go:195] Run: cat /version.json
	I1207 23:36:30.187898  673565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-600852
	I1207 23:36:30.187900  673565 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:36:30.187983  673565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-600852
	I1207 23:36:30.209183  673565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/auto-600852/id_rsa Username:docker}
	I1207 23:36:30.209573  673565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/auto-600852/id_rsa Username:docker}
	I1207 23:36:30.375401  673565 ssh_runner.go:195] Run: systemctl --version
	I1207 23:36:30.387970  673565 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:36:30.451937  673565 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:36:30.463288  673565 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:36:30.463383  673565 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:36:30.500519  673565 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 23:36:30.500548  673565 start.go:496] detecting cgroup driver to use...
	I1207 23:36:30.500586  673565 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:36:30.500644  673565 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:36:30.523553  673565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:36:30.542090  673565 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:36:30.542193  673565 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:36:30.562685  673565 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:36:30.590093  673565 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:36:30.714368  673565 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:36:30.834479  673565 docker.go:234] disabling docker service ...
	I1207 23:36:30.834549  673565 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:36:30.869941  673565 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:36:30.894568  673565 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:36:31.002667  673565 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:36:31.119924  673565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:36:31.142153  673565 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:36:31.163106  673565 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:36:31.163177  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.174874  673565 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:36:31.174957  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.186962  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.197787  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.208567  673565 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:36:31.217977  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.228985  673565 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.243864  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.253438  673565 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:36:31.261577  673565 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:36:31.269437  673565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:31.349977  673565 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:36:31.501537  673565 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:36:31.501610  673565 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:36:31.507081  673565 start.go:564] Will wait 60s for crictl version
	I1207 23:36:31.507153  673565 ssh_runner.go:195] Run: which crictl
	I1207 23:36:31.511425  673565 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:36:31.539351  673565 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:36:31.539441  673565 ssh_runner.go:195] Run: crio --version
	I1207 23:36:31.569558  673565 ssh_runner.go:195] Run: crio --version
	I1207 23:36:31.600664  673565 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
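The run above rewrites CRI-O's drop-in configuration on the auto-600852 node (crictl endpoint, pause image, systemd cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and restarts the service before kubeadm is invoked. A minimal shell sketch for spot-checking the result, assuming the profile name from the log and that "minikube ssh" is how the node is reached:

    # Hedged sketch: confirm the settings the sed edits above should leave in the CRI-O drop-in.
    minikube ssh -p auto-600852 -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    minikube ssh -p auto-600852 -- sudo crictl info | head -n 20   # runtime should answer on /var/run/crio/crio.sock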
	W1207 23:36:30.349629  663227 node_ready.go:57] node "default-k8s-diff-port-312944" has "Ready":"False" status (will retry)
	I1207 23:36:30.849610  663227 node_ready.go:49] node "default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:30.849651  663227 node_ready.go:38] duration metric: took 11.006384498s for node "default-k8s-diff-port-312944" to be "Ready" ...
	I1207 23:36:30.849671  663227 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:36:30.849731  663227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:36:30.872873  663227 api_server.go:72] duration metric: took 11.455368709s to wait for apiserver process to appear ...
	I1207 23:36:30.873121  663227 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:36:30.873147  663227 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1207 23:36:30.882134  663227 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1207 23:36:30.883434  663227 api_server.go:141] control plane version: v1.34.2
	I1207 23:36:30.883472  663227 api_server.go:131] duration metric: took 10.341551ms to wait for apiserver health ...
	I1207 23:36:30.883493  663227 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:36:30.888989  663227 system_pods.go:59] 8 kube-system pods found
	I1207 23:36:30.889030  663227 system_pods.go:61] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:30.889038  663227 system_pods.go:61] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:30.889046  663227 system_pods.go:61] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:30.889052  663227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:30.889058  663227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:30.889063  663227 system_pods.go:61] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:30.889069  663227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:30.889076  663227 system_pods.go:61] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:30.889086  663227 system_pods.go:74] duration metric: took 5.585227ms to wait for pod list to return data ...
	I1207 23:36:30.889097  663227 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:36:30.892279  663227 default_sa.go:45] found service account: "default"
	I1207 23:36:30.892306  663227 default_sa.go:55] duration metric: took 3.201148ms for default service account to be created ...
	I1207 23:36:30.892318  663227 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:36:30.896636  663227 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:30.896687  663227 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:30.896696  663227 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:30.896704  663227 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:30.896710  663227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:30.896735  663227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:30.896745  663227 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:30.896751  663227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:30.896758  663227 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:30.896786  663227 retry.go:31] will retry after 222.292044ms: missing components: kube-dns
	I1207 23:36:31.126979  663227 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:31.127080  663227 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:31.127100  663227 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:31.127109  663227 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:31.127120  663227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:31.127129  663227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:31.127135  663227 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:31.127139  663227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:31.127147  663227 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:31.127169  663227 retry.go:31] will retry after 307.291664ms: missing components: kube-dns
	I1207 23:36:31.440222  663227 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:31.440265  663227 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:31.440273  663227 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:31.440283  663227 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:31.440290  663227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:31.440295  663227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:31.440302  663227 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:31.440307  663227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:31.440314  663227 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:31.440354  663227 retry.go:31] will retry after 426.001876ms: missing components: kube-dns
	I1207 23:36:31.871913  663227 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:31.871946  663227 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Running
	I1207 23:36:31.871953  663227 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:31.871957  663227 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:31.871961  663227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:31.871968  663227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:31.871973  663227 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:31.871978  663227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:31.871982  663227 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Running
	I1207 23:36:31.871993  663227 system_pods.go:126] duration metric: took 979.653637ms to wait for k8s-apps to be running ...
	I1207 23:36:31.872008  663227 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:36:31.872059  663227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:31.886268  663227 system_svc.go:56] duration metric: took 14.248421ms WaitForService to wait for kubelet
	I1207 23:36:31.886301  663227 kubeadm.go:587] duration metric: took 12.468803502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:36:31.886319  663227 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:36:31.889484  663227 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:36:31.889518  663227 node_conditions.go:123] node cpu capacity is 8
	I1207 23:36:31.889536  663227 node_conditions.go:105] duration metric: took 3.211978ms to run NodePressure ...
	I1207 23:36:31.889549  663227 start.go:242] waiting for startup goroutines ...
	I1207 23:36:31.889557  663227 start.go:247] waiting for cluster config update ...
	I1207 23:36:31.889567  663227 start.go:256] writing updated cluster config ...
	I1207 23:36:31.889825  663227 ssh_runner.go:195] Run: rm -f paused
	I1207 23:36:31.893873  663227 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:36:31.900462  663227 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p4v2f" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.905254  663227 pod_ready.go:94] pod "coredns-66bc5c9577-p4v2f" is "Ready"
	I1207 23:36:31.905281  663227 pod_ready.go:86] duration metric: took 4.791855ms for pod "coredns-66bc5c9577-p4v2f" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.908065  663227 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.913118  663227 pod_ready.go:94] pod "etcd-default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:31.913140  663227 pod_ready.go:86] duration metric: took 5.030101ms for pod "etcd-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.914938  663227 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.918718  663227 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:31.918742  663227 pod_ready.go:86] duration metric: took 3.786001ms for pod "kube-apiserver-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.920411  663227 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:32.299220  663227 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:32.299254  663227 pod_ready.go:86] duration metric: took 378.816082ms for pod "kube-controller-manager-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:32.498428  663227 pod_ready.go:83] waiting for pod "kube-proxy-7stg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:32.898763  663227 pod_ready.go:94] pod "kube-proxy-7stg5" is "Ready"
	I1207 23:36:32.898796  663227 pod_ready.go:86] duration metric: took 400.341199ms for pod "kube-proxy-7stg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:33.099537  663227 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:33.499044  663227 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:33.499080  663227 pod_ready.go:86] duration metric: took 399.514812ms for pod "kube-scheduler-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:33.499097  663227 pod_ready.go:40] duration metric: took 1.605186446s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:36:33.554736  663227 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 23:36:33.556778  663227 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-312944" cluster and "default" namespace by default
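With the default-k8s-diff-port-312944 profile reported ready, kubectl is already pointed at the new cluster. A short usage sketch, assuming the kubectl context name matches the profile name (minikube's default):

    # Hedged sketch: basic post-start checks against the cluster the log just declared ready.
    kubectl config use-context default-k8s-diff-port-312944   # context name assumed to equal the profile name
    kubectl get nodes -o wide                                  # the node should report Ready, as in the log
    kubectl -n kube-system get pods                            # the eight kube-system pods listed above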
	I1207 23:36:30.017812  673247 addons.go:530] duration metric: took 2.349839549s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1207 23:36:30.500546  673247 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1207 23:36:30.506524  673247 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:36:30.506554  673247 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:36:30.999819  673247 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1207 23:36:31.005585  673247 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1207 23:36:31.006742  673247 api_server.go:141] control plane version: v1.34.2
	I1207 23:36:31.006775  673247 api_server.go:131] duration metric: took 1.007113458s to wait for apiserver health ...
	I1207 23:36:31.006788  673247 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:36:31.011558  673247 system_pods.go:59] 8 kube-system pods found
	I1207 23:36:31.011611  673247 system_pods.go:61] "coredns-66bc5c9577-wvgqf" [80c1683b-a66c-4dd4-8d91-0e5cc2bd5e18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:31.011624  673247 system_pods.go:61] "etcd-embed-certs-654118" [b79ec937-fed7-4df6-9a57-24d6513402e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:36:31.011635  673247 system_pods.go:61] "kindnet-68q87" [7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 23:36:31.011645  673247 system_pods.go:61] "kube-apiserver-embed-certs-654118" [f6fab7ae-3dd9-48d2-8b83-9f72e33bbee1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:36:31.011655  673247 system_pods.go:61] "kube-controller-manager-embed-certs-654118" [9748b389-d642-4475-bc81-39199511f4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:36:31.011664  673247 system_pods.go:61] "kube-proxy-l75b2" [2f061a54-3641-473d-9c6a-77e51062e955] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 23:36:31.011671  673247 system_pods.go:61] "kube-scheduler-embed-certs-654118" [eb585812-9353-43b0-a610-34f3fcb6d32f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:36:31.011678  673247 system_pods.go:61] "storage-provisioner" [34685d0c-67b3-4683-b817-772fa2ef1c77] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:31.011701  673247 system_pods.go:74] duration metric: took 4.903872ms to wait for pod list to return data ...
	I1207 23:36:31.011712  673247 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:36:31.014761  673247 default_sa.go:45] found service account: "default"
	I1207 23:36:31.014791  673247 default_sa.go:55] duration metric: took 3.070892ms for default service account to be created ...
	I1207 23:36:31.014804  673247 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:36:31.018030  673247 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:31.018077  673247 system_pods.go:89] "coredns-66bc5c9577-wvgqf" [80c1683b-a66c-4dd4-8d91-0e5cc2bd5e18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:31.018089  673247 system_pods.go:89] "etcd-embed-certs-654118" [b79ec937-fed7-4df6-9a57-24d6513402e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:36:31.018098  673247 system_pods.go:89] "kindnet-68q87" [7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 23:36:31.018106  673247 system_pods.go:89] "kube-apiserver-embed-certs-654118" [f6fab7ae-3dd9-48d2-8b83-9f72e33bbee1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:36:31.018121  673247 system_pods.go:89] "kube-controller-manager-embed-certs-654118" [9748b389-d642-4475-bc81-39199511f4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:36:31.018134  673247 system_pods.go:89] "kube-proxy-l75b2" [2f061a54-3641-473d-9c6a-77e51062e955] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 23:36:31.018142  673247 system_pods.go:89] "kube-scheduler-embed-certs-654118" [eb585812-9353-43b0-a610-34f3fcb6d32f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:36:31.018148  673247 system_pods.go:89] "storage-provisioner" [34685d0c-67b3-4683-b817-772fa2ef1c77] Running
	I1207 23:36:31.018164  673247 system_pods.go:126] duration metric: took 3.352378ms to wait for k8s-apps to be running ...
	I1207 23:36:31.018176  673247 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:36:31.018232  673247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:31.034999  673247 system_svc.go:56] duration metric: took 16.811304ms WaitForService to wait for kubelet
	I1207 23:36:31.035038  673247 kubeadm.go:587] duration metric: took 3.36708951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:36:31.035063  673247 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:36:31.037964  673247 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:36:31.037997  673247 node_conditions.go:123] node cpu capacity is 8
	I1207 23:36:31.038017  673247 node_conditions.go:105] duration metric: took 2.947717ms to run NodePressure ...
	I1207 23:36:31.038038  673247 start.go:242] waiting for startup goroutines ...
	I1207 23:36:31.038047  673247 start.go:247] waiting for cluster config update ...
	I1207 23:36:31.038060  673247 start.go:256] writing updated cluster config ...
	I1207 23:36:31.038388  673247 ssh_runner.go:195] Run: rm -f paused
	I1207 23:36:31.045933  673247 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:36:31.051360  673247 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wvgqf" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:36:33.056839  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
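The embed-certs-654118 run above gates on the apiserver's /healthz endpoint, tolerating a 500 while poststarthook/rbac/bootstrap-roles is still pending and proceeding once it returns 200. The same probe can be reproduced by hand; the address and port are taken from the log, and -k is needed because the apiserver presents a cluster-internal certificate:

    # Hedged sketch of the health probe the log polls roughly once per second.
    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.103.2:8443/healthz
    curl -sk 'https://192.168.103.2:8443/healthz?verbose'   # per-check [+]/[-] breakdown, like the 500 body above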
	I1207 23:36:31.601878  673565 cli_runner.go:164] Run: docker network inspect auto-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:36:31.621720  673565 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1207 23:36:31.626504  673565 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:31.638820  673565 kubeadm.go:884] updating cluster {Name:auto-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:36:31.638979  673565 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:36:31.639045  673565 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:31.671512  673565 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:31.671537  673565 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:36:31.671584  673565 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:31.698600  673565 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:31.698621  673565 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:36:31.698629  673565 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1207 23:36:31.698758  673565 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-600852 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
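The kubelet unit override above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 361-byte scp a few lines further down). A quick way to confirm what systemd actually loaded after the daemon-reload, assuming the commands are run on the node:

    # Hedged sketch: inspect the unit plus drop-ins that systemd ends up using.
    systemctl cat kubelet                  # kubelet.service plus the 10-kubeadm.conf override
    systemctl show kubelet -p ExecStart    # the effective ExecStart after the empty reset above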
	I1207 23:36:31.698849  673565 ssh_runner.go:195] Run: crio config
	I1207 23:36:31.748038  673565 cni.go:84] Creating CNI manager for ""
	I1207 23:36:31.748064  673565 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:31.748082  673565 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:36:31.748110  673565 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-600852 NodeName:auto-600852 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:36:31.748274  673565 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-600852"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:36:31.748395  673565 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:36:31.757145  673565 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:36:31.757219  673565 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:36:31.766099  673565 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1207 23:36:31.779629  673565 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:36:31.800018  673565 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
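At this point the rendered kubeadm config has been pushed to /var/tmp/minikube/kubeadm.yaml.new on the node. A hedged sketch of validating it ahead of the real init, using the bundled kubeadm path from the log; running it as root on the node is an assumption:

    # Hedged sketch: exercise the generated config without changing node state.
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run | tail -n 20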
	I1207 23:36:31.817264  673565 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:36:31.822473  673565 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:31.834622  673565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:31.928227  673565 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:36:31.958251  673565 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852 for IP: 192.168.85.2
	I1207 23:36:31.958272  673565 certs.go:195] generating shared ca certs ...
	I1207 23:36:31.958288  673565 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:31.958457  673565 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:36:31.958513  673565 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:36:31.958523  673565 certs.go:257] generating profile certs ...
	I1207 23:36:31.958577  673565 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.key
	I1207 23:36:31.958592  673565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.crt with IP's: []
	I1207 23:36:32.182791  673565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.crt ...
	I1207 23:36:32.182826  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.crt: {Name:mkcb703f0f9e4b0a56f30bafc152e39ee98c32af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.183061  673565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.key ...
	I1207 23:36:32.183086  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.key: {Name:mk33e4c8c1a1e58f23780f89a8c200357fe9af2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.183245  673565 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key.5c32f241
	I1207 23:36:32.183269  673565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt.5c32f241 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1207 23:36:32.472518  673565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt.5c32f241 ...
	I1207 23:36:32.472552  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt.5c32f241: {Name:mkd72f567c38cb3b6e2eeb019eb8803d7c9b9ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.472743  673565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key.5c32f241 ...
	I1207 23:36:32.472756  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key.5c32f241: {Name:mk6a31094374001ab612b14e9c18e5030a69691d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.472836  673565 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt.5c32f241 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt
	I1207 23:36:32.472933  673565 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key.5c32f241 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key
	I1207 23:36:32.472997  673565 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.key
	I1207 23:36:32.473022  673565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.crt with IP's: []
	I1207 23:36:32.610842  673565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.crt ...
	I1207 23:36:32.610871  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.crt: {Name:mkdfed3c317c9a9b5274d2282923661c521bedc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.611075  673565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.key ...
	I1207 23:36:32.611096  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.key: {Name:mk38fd78995b6a1d76b48fda10f3d7ef0f5e91f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.611376  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:36:32.611433  673565 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:36:32.611449  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:36:32.611509  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:36:32.611544  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:36:32.611577  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:36:32.611637  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:32.612219  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:36:32.631785  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:36:32.651000  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:36:32.670569  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:36:32.690024  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1207 23:36:32.708926  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:36:32.727240  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:36:32.751398  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:36:32.776129  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:36:32.799218  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:36:32.818906  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:36:32.839578  673565 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:36:32.853944  673565 ssh_runner.go:195] Run: openssl version
	I1207 23:36:32.860417  673565 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:32.869087  673565 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:36:32.877433  673565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:32.881465  673565 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:32.881547  673565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:32.920658  673565 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:36:32.928919  673565 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:36:32.937680  673565 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:36:32.945804  673565 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:36:32.955606  673565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:36:32.959865  673565 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:36:32.959922  673565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:36:32.996040  673565 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:36:33.004381  673565 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
	I1207 23:36:33.012360  673565 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:36:33.020201  673565 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:36:33.028224  673565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:36:33.032626  673565 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:36:33.032716  673565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:36:33.069017  673565 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:33.078318  673565 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
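The openssl/ln pairs above install each CA into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), the lookup scheme OpenSSL uses for -CApath verification. A minimal sketch of the same convention, run on the node, with paths taken from the log:

    # Hedged sketch of the hash-symlink convention used above.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt   # apiserver cert is signed by this CA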
	I1207 23:36:33.086473  673565 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:36:33.090434  673565 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:36:33.090491  673565 kubeadm.go:401] StartCluster: {Name:auto-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:33.090588  673565 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:36:33.090632  673565 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:36:33.118539  673565 cri.go:89] found id: ""
	I1207 23:36:33.118605  673565 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:36:33.127222  673565 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:36:33.135780  673565 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:36:33.135833  673565 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:36:33.144151  673565 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:36:33.144172  673565 kubeadm.go:158] found existing configuration files:
	
	I1207 23:36:33.144215  673565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:36:33.152854  673565 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:36:33.152928  673565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:36:33.160896  673565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:36:33.168822  673565 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:36:33.168877  673565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:36:33.176284  673565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:36:33.184383  673565 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:36:33.184442  673565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:36:33.193714  673565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:36:33.202016  673565 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:36:33.202077  673565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 23:36:33.210129  673565 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:36:33.271747  673565 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 23:36:33.334835  673565 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
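kubeadm init is started with a long --ignore-preflight-errors list and surfaces two warnings, neither of which is fatal here. Both checks can be reproduced on the node; a hedged sketch (the kernel version comes from the warning itself):

    # Hedged sketch: the checks behind the two [WARNING] lines above.
    sudo modprobe configs          # fails on this 6.8.0-1044-gcp kernel, hence the SystemVerification warning
    systemctl is-enabled kubelet   # the Service-Kubelet warning fires because the unit is started but not enabled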
	I1207 23:36:30.477140  677704 out.go:252] * Restarting existing docker container for "newest-cni-858719" ...
	I1207 23:36:30.477215  677704 cli_runner.go:164] Run: docker start newest-cni-858719
	I1207 23:36:30.809394  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:30.836380  677704 kic.go:430] container "newest-cni-858719" state is running.
	I1207 23:36:30.836921  677704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:30.866477  677704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json ...
	I1207 23:36:30.866809  677704 machine.go:94] provisionDockerMachine start ...
	I1207 23:36:30.866882  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:30.898514  677704 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:30.898872  677704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1207 23:36:30.898893  677704 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:36:30.899781  677704 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50554->127.0.0.1:33473: read: connection reset by peer
	I1207 23:36:34.032697  677704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-858719
	
	I1207 23:36:34.032735  677704 ubuntu.go:182] provisioning hostname "newest-cni-858719"
	I1207 23:36:34.032802  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.054768  677704 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:34.055076  677704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1207 23:36:34.055103  677704 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-858719 && echo "newest-cni-858719" | sudo tee /etc/hostname
	I1207 23:36:34.201076  677704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-858719
	
	I1207 23:36:34.201188  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.220957  677704 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:34.221305  677704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1207 23:36:34.221350  677704 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-858719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-858719/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-858719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:36:34.354180  677704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:36:34.354212  677704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:36:34.354255  677704 ubuntu.go:190] setting up certificates
	I1207 23:36:34.354268  677704 provision.go:84] configureAuth start
	I1207 23:36:34.354381  677704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:34.372396  677704 provision.go:143] copyHostCerts
	I1207 23:36:34.372463  677704 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:36:34.372474  677704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:36:34.372543  677704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:36:34.372653  677704 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:36:34.372662  677704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:36:34.372691  677704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:36:34.372767  677704 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:36:34.372775  677704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:36:34.372800  677704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:36:34.372863  677704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.newest-cni-858719 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-858719]
	I1207 23:36:34.438526  677704 provision.go:177] copyRemoteCerts
	I1207 23:36:34.438610  677704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:36:34.438661  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.457056  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:34.550753  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:36:34.569684  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1207 23:36:34.587851  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 23:36:34.605253  677704 provision.go:87] duration metric: took 250.964673ms to configureAuth
	I1207 23:36:34.605281  677704 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:36:34.605478  677704 config.go:182] Loaded profile config "newest-cni-858719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:34.605592  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.623964  677704 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:34.624277  677704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1207 23:36:34.624303  677704 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:36:34.919543  677704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:36:34.919573  677704 machine.go:97] duration metric: took 4.052749993s to provisionDockerMachine
	I1207 23:36:34.919588  677704 start.go:293] postStartSetup for "newest-cni-858719" (driver="docker")
	I1207 23:36:34.919604  677704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:36:34.919670  677704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:36:34.919713  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.940317  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:35.042131  677704 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:36:35.047382  677704 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:36:35.047431  677704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:36:35.047446  677704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:36:35.047504  677704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:36:35.047605  677704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:36:35.047744  677704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:36:35.059463  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:35.084378  677704 start.go:296] duration metric: took 164.724573ms for postStartSetup
	I1207 23:36:35.084483  677704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:36:35.084536  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:35.108317  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:35.212214  677704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:36:35.219280  677704 fix.go:56] duration metric: took 4.772929293s for fixHost
	I1207 23:36:35.219313  677704 start.go:83] releasing machines lock for "newest-cni-858719", held for 4.773005701s
	I1207 23:36:35.219452  677704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:35.245630  677704 ssh_runner.go:195] Run: cat /version.json
	I1207 23:36:35.245689  677704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:36:35.245694  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:35.245779  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:35.270514  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:35.270842  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:35.457960  677704 ssh_runner.go:195] Run: systemctl --version
	I1207 23:36:35.466947  677704 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:36:35.513529  677704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:36:35.519931  677704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:36:35.520007  677704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:36:35.531091  677704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:36:35.531122  677704 start.go:496] detecting cgroup driver to use...
	I1207 23:36:35.531158  677704 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:36:35.531220  677704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:36:35.552715  677704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:36:35.570570  677704 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:36:35.570644  677704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:36:35.591911  677704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:36:35.609216  677704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:36:35.730291  677704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:36:35.849860  677704 docker.go:234] disabling docker service ...
	I1207 23:36:35.849939  677704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:36:35.870164  677704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:36:35.887316  677704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:36:36.010320  677704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:36:36.134166  677704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
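	With cri-docker and docker stopped and masked above, only CRI-O should remain active; a quick manual check from the node (a sketch, not part of the run) would be:
	
	  # expect "active" only for crio; docker, cri-docker and containerd should be inactive or masked
	  systemctl is-active crio docker cri-docker containerd
	  systemctl is-enabled docker.socket cri-docker.socket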
	I1207 23:36:36.151763  677704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:36:36.171658  677704 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:36:36.171724  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.185507  677704 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:36:36.185577  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.199807  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.212561  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.224857  677704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:36:36.236376  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.248851  677704 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.260134  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.271388  677704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:36:36.282450  677704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:36:36.292401  677704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:36.402590  677704 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:36:36.781588  677704 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:36:36.781654  677704 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:36:36.787090  677704 start.go:564] Will wait 60s for crictl version
	I1207 23:36:36.787149  677704 ssh_runner.go:195] Run: which crictl
	I1207 23:36:36.792213  677704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:36:36.824404  677704 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
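	The version probe above goes through crictl's default endpoint; an equivalent manual check that pins the CRI-O socket explicitly (the same value written to /etc/crictl.yaml earlier) looks like:
	
	  # query the runtime over the CRI socket that minikube just configured
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info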
	I1207 23:36:36.824506  677704 ssh_runner.go:195] Run: crio --version
	I1207 23:36:36.862950  677704 ssh_runner.go:195] Run: crio --version
	I1207 23:36:36.905770  677704 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1207 23:36:36.907106  677704 cli_runner.go:164] Run: docker network inspect newest-cni-858719 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:36:36.931941  677704 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1207 23:36:36.937364  677704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:36.953376  677704 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1207 23:36:36.954739  677704 kubeadm.go:884] updating cluster {Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:36:36.954910  677704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:36:36.954978  677704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:37.001232  677704 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:37.001289  677704 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:36:37.001372  677704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:37.035868  677704 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:37.035911  677704 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:36:37.035920  677704 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1207 23:36:37.036047  677704 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-858719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
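	The kubelet unit rendered above is installed as a systemd drop-in a few lines further down (10-kubeadm.conf); to inspect the effective unit after the daemon-reload, one could run, for example:
	
	  # show the base unit plus every drop-in, then the generated drop-in itself
	  systemctl cat kubelet
	  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf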
	I1207 23:36:37.036135  677704 ssh_runner.go:195] Run: crio config
	I1207 23:36:37.100859  677704 cni.go:84] Creating CNI manager for ""
	I1207 23:36:37.100891  677704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:37.100916  677704 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1207 23:36:37.100949  677704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-858719 NodeName:newest-cni-858719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:36:37.101134  677704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-858719"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:36:37.101225  677704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1207 23:36:37.112723  677704 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:36:37.112803  677704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:36:37.124443  677704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1207 23:36:37.142815  677704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1207 23:36:37.160115  677704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
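	The rendered kubeadm config above lands in /var/tmp/minikube/kubeadm.yaml.new (2218 bytes); a sketch for sanity-checking it by hand, assuming the preloaded kubeadm binary is used (the "config validate" subcommand is only present in newer kubeadm releases):
	
	  # print upstream defaults for the same API versions to diff against the generated file
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config print init-defaults
	
	  # lint the generated file directly (newer kubeadm only)
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new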
	I1207 23:36:37.177233  677704 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:36:37.182248  677704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:37.195883  677704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:37.321978  677704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:36:37.349434  677704 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719 for IP: 192.168.76.2
	I1207 23:36:37.349460  677704 certs.go:195] generating shared ca certs ...
	I1207 23:36:37.349483  677704 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:37.349673  677704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:36:37.349732  677704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:36:37.349742  677704 certs.go:257] generating profile certs ...
	I1207 23:36:37.349907  677704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.key
	I1207 23:36:37.349978  677704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key.81fe4363
	I1207 23:36:37.350036  677704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key
	I1207 23:36:37.350178  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:36:37.350217  677704 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:36:37.350228  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:36:37.350264  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:36:37.350296  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:36:37.350347  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:36:37.350407  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:37.351226  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:36:37.377735  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:36:37.403808  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:36:37.427723  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:36:37.460810  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1207 23:36:37.487067  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:36:37.513861  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:36:37.539259  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:36:37.565376  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:36:37.592124  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:36:37.619212  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:36:37.647272  677704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:36:37.667351  677704 ssh_runner.go:195] Run: openssl version
	I1207 23:36:37.676513  677704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:36:37.687971  677704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:36:37.699159  677704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:36:37.704977  677704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:36:37.705049  677704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:36:37.765716  677704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:36:37.779131  677704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:36:37.793745  677704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:36:37.805547  677704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:36:37.811144  677704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:36:37.811212  677704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:36:37.854651  677704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:37.863269  677704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:37.872157  677704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:36:37.881013  677704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:37.886652  677704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:37.886726  677704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:37.925060  677704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
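	The `test -L /etc/ssl/certs/*.0` probes above check OpenSSL-style subject-hash symlinks; each hash name (51391683.0, 3ec20f2e.0, b5213941.0) is the value printed by the preceding `openssl x509 -hash` call. A minimal reproduction on the node:
	
	  # derive the symlink name for the minikube CA and confirm it resolves
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  ls -l "/etc/ssl/certs/${h}.0"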
	I1207 23:36:37.933601  677704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:36:37.937936  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:36:37.974013  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:36:38.011069  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:36:38.048975  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:36:38.089220  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:36:38.126552  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1207 23:36:38.171830  677704 kubeadm.go:401] StartCluster: {Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:38.171932  677704 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:36:38.171998  677704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:36:38.202873  677704 cri.go:89] found id: ""
	I1207 23:36:38.202948  677704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:36:38.211787  677704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:36:38.211805  677704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:36:38.211858  677704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:36:38.220804  677704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:36:38.221673  677704 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-858719" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:36:38.222177  677704 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-389542/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-858719" cluster setting kubeconfig missing "newest-cni-858719" context setting]
	I1207 23:36:38.222947  677704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:38.242108  677704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:36:38.251961  677704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1207 23:36:38.251999  677704 kubeadm.go:602] duration metric: took 40.189524ms to restartPrimaryControlPlane
	I1207 23:36:38.252009  677704 kubeadm.go:403] duration metric: took 80.190889ms to StartCluster
	I1207 23:36:38.252030  677704 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:38.252111  677704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:36:38.253734  677704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:38.296126  677704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:36:38.296231  677704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:36:38.296364  677704 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-858719"
	I1207 23:36:38.296391  677704 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-858719"
	I1207 23:36:38.296385  677704 addons.go:70] Setting dashboard=true in profile "newest-cni-858719"
	W1207 23:36:38.296403  677704 addons.go:248] addon storage-provisioner should already be in state true
	I1207 23:36:38.296420  677704 addons.go:239] Setting addon dashboard=true in "newest-cni-858719"
	W1207 23:36:38.296437  677704 addons.go:248] addon dashboard should already be in state true
	I1207 23:36:38.296445  677704 host.go:66] Checking if "newest-cni-858719" exists ...
	I1207 23:36:38.296432  677704 addons.go:70] Setting default-storageclass=true in profile "newest-cni-858719"
	I1207 23:36:38.296468  677704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-858719"
	I1207 23:36:38.296475  677704 host.go:66] Checking if "newest-cni-858719" exists ...
	I1207 23:36:38.296480  677704 config.go:182] Loaded profile config "newest-cni-858719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:38.296903  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:38.296913  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:38.296916  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:38.304834  677704 out.go:179] * Verifying Kubernetes components...
	I1207 23:36:38.321121  677704 addons.go:239] Setting addon default-storageclass=true in "newest-cni-858719"
	W1207 23:36:38.321142  677704 addons.go:248] addon default-storageclass should already be in state true
	I1207 23:36:38.321167  677704 host.go:66] Checking if "newest-cni-858719" exists ...
	I1207 23:36:38.321502  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:38.331788  677704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1207 23:36:38.331860  677704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:38.331806  677704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:36:38.339675  677704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:36:38.339781  677704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:36:38.339832  677704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1207 23:36:35.058792  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	W1207 23:36:37.059360  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	W1207 23:36:39.558825  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	I1207 23:36:38.339851  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:38.340452  677704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:36:38.340471  677704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:36:38.340521  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:38.362068  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:38.362162  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:38.362941  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1207 23:36:38.362965  677704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1207 23:36:38.363025  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:38.392574  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:38.462983  677704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:36:38.484595  677704 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:36:38.484756  677704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:36:38.486717  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:36:38.491481  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:36:38.510448  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1207 23:36:38.510515  677704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1207 23:36:38.536570  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1207 23:36:38.536602  677704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1207 23:36:38.566084  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1207 23:36:38.566115  677704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1207 23:36:38.600942  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1207 23:36:38.600972  677704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1207 23:36:38.609165  677704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1207 23:36:38.609215  677704 retry.go:31] will retry after 211.51386ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1207 23:36:38.609284  677704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1207 23:36:38.609300  677704 retry.go:31] will retry after 303.789465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
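	The two "connection refused" failures above simply race the apiserver restart, and the log shows minikube retrying them below; a quick readiness probe one could run before (re)applying addon manifests, using the same kubeconfig and binary paths as the log:
	
	  # returns "ok" once the restarted apiserver is accepting requests
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get --raw /readyz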
	I1207 23:36:38.623815  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1207 23:36:38.624079  677704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1207 23:36:38.653443  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1207 23:36:38.653478  677704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1207 23:36:38.678913  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1207 23:36:38.678945  677704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1207 23:36:38.701578  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1207 23:36:38.701607  677704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1207 23:36:38.720445  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:36:38.720502  677704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1207 23:36:38.743195  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:36:38.821620  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:36:38.913583  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:36:38.985710  677704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:36:41.415564  677704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.593900667s)
	I1207 23:36:41.416832  677704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.673583567s)
	I1207 23:36:41.418467  677704 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-858719 addons enable metrics-server
	
	I1207 23:36:41.532720  677704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.619029892s)
	I1207 23:36:41.533073  677704 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.547330372s)
	I1207 23:36:41.533100  677704 api_server.go:72] duration metric: took 3.236908876s to wait for apiserver process to appear ...
	I1207 23:36:41.533107  677704 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:36:41.533129  677704 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:36:41.534688  677704 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner
	I1207 23:36:41.535780  677704 addons.go:530] duration metric: took 3.239558186s for enable addons: enabled=[dashboard default-storageclass storage-provisioner]
	I1207 23:36:41.541555  677704 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:36:41.541584  677704 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
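	The verbose healthz payload above can be reproduced by hand from the node against the same endpoint the log polls; a sketch, assuming anonymous auth is enabled on the apiserver (the kubeadm default), so /healthz is readable without credentials:
	
	  # -k skips TLS verification; the second form trusts the cluster CA instead
	  curl -sk "https://192.168.76.2:8443/healthz?verbose"
	  curl -s --cacert /var/lib/minikube/certs/ca.crt "https://192.168.76.2:8443/healthz?verbose"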
	I1207 23:36:42.033193  677704 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:36:42.038840  677704 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1207 23:36:42.040044  677704 api_server.go:141] control plane version: v1.35.0-beta.0
	I1207 23:36:42.040086  677704 api_server.go:131] duration metric: took 506.968227ms to wait for apiserver health ...
	I1207 23:36:42.040100  677704 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:36:42.044016  677704 system_pods.go:59] 8 kube-system pods found
	I1207 23:36:42.044061  677704 system_pods.go:61] "coredns-7d764666f9-dp6qz" [1403dc21-d613-4225-bf80-faf8d23e774c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1207 23:36:42.044076  677704 system_pods.go:61] "etcd-newest-cni-858719" [58c61faa-719b-477c-8216-d9aaa8554cec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:36:42.044091  677704 system_pods.go:61] "kindnet-5zzk9" [b8e05261-d743-488e-9543-b60973ff09b4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 23:36:42.044103  677704 system_pods.go:61] "kube-apiserver-newest-cni-858719" [343d3191-d091-4436-a131-68718cb68508] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:36:42.044116  677704 system_pods.go:61] "kube-controller-manager-newest-cni-858719" [c2876dc8-1228-4980-bd43-1d58fcd760f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:36:42.044131  677704 system_pods.go:61] "kube-proxy-p8v8n" [494a11f1-086c-43f3-92e7-4b59d073c5f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 23:36:42.044143  677704 system_pods.go:61] "kube-scheduler-newest-cni-858719" [28d72586-76c3-4f37-b20e-0c7de9fe90ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:36:42.044153  677704 system_pods.go:61] "storage-provisioner" [a39abdef-8c48-494a-9bb1-645330622d99] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1207 23:36:42.044176  677704 system_pods.go:74] duration metric: took 4.066756ms to wait for pod list to return data ...
	I1207 23:36:42.044190  677704 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:36:42.047787  677704 default_sa.go:45] found service account: "default"
	I1207 23:36:42.047814  677704 default_sa.go:55] duration metric: took 3.616282ms for default service account to be created ...
	I1207 23:36:42.047828  677704 kubeadm.go:587] duration metric: took 3.751636263s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1207 23:36:42.047853  677704 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:36:42.051921  677704 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:36:42.051998  677704 node_conditions.go:123] node cpu capacity is 8
	I1207 23:36:42.052034  677704 node_conditions.go:105] duration metric: took 4.174035ms to run NodePressure ...
	I1207 23:36:42.052060  677704 start.go:242] waiting for startup goroutines ...
	I1207 23:36:42.052081  677704 start.go:247] waiting for cluster config update ...
	I1207 23:36:42.052105  677704 start.go:256] writing updated cluster config ...
	I1207 23:36:42.052449  677704 ssh_runner.go:195] Run: rm -f paused
	I1207 23:36:42.126816  677704 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1207 23:36:42.128432  677704 out.go:179] * Done! kubectl is now configured to use "newest-cni-858719" cluster and "default" namespace by default
	W1207 23:36:41.560993  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	W1207 23:36:44.058276  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.747198973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.750789386Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1b886b1e-3a37-4b18-b7ef-ee93ded349aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.751173122Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=46cf6b0b-51d1-418d-8d21-a1d52b3ff0d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.752844752Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.753614709Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.753691357Z" level=info msg="Ran pod sandbox f0cc146fbfa2a1f55885742a7303a3195bdbcee7f25ab7cee37c0298b36f7fb4 with infra container: kube-system/kube-proxy-p8v8n/POD" id=1b886b1e-3a37-4b18-b7ef-ee93ded349aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.754574862Z" level=info msg="Ran pod sandbox 8208f390c38e9dcdfcf1dfd3262eccb3135b1ec8cb4eb4f0e9b2b4f0efb64e68 with infra container: kube-system/kindnet-5zzk9/POD" id=46cf6b0b-51d1-418d-8d21-a1d52b3ff0d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.755222239Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f34def23-74f3-478a-9146-9fa3c544446d name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.756077161Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=548a62cd-64e5-4c3a-940a-6218ab8fa99e name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.7562978Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b863339c-9998-49c8-85be-69c1c3a2dea5 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.757619514Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ff33236a-fab2-4fcb-9795-4f69bfde768f name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.757733732Z" level=info msg="Creating container: kube-system/kube-proxy-p8v8n/kube-proxy" id=f275e99d-05a9-4dde-b458-dade5b2c408f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.757887357Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.759524416Z" level=info msg="Creating container: kube-system/kindnet-5zzk9/kindnet-cni" id=1c64a00c-405e-41ff-b845-6d525f9f5642 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.759615609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.764618129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.765307363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.767144623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.769037112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.804715712Z" level=info msg="Created container cc1cd9bf7531e730eee0e48829fb2f2262509a9acb9a58a449d07c2908258bae: kube-system/kindnet-5zzk9/kindnet-cni" id=1c64a00c-405e-41ff-b845-6d525f9f5642 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.805879807Z" level=info msg="Starting container: cc1cd9bf7531e730eee0e48829fb2f2262509a9acb9a58a449d07c2908258bae" id=92081333-dcfa-413e-8df0-dc2b3200e29d name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.807915137Z" level=info msg="Created container b16beb4e4b195daeeefa06631cdab33892ab5de00e1eaa4f3d42a32591fc4c36: kube-system/kube-proxy-p8v8n/kube-proxy" id=f275e99d-05a9-4dde-b458-dade5b2c408f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.808423882Z" level=info msg="Started container" PID=1067 containerID=cc1cd9bf7531e730eee0e48829fb2f2262509a9acb9a58a449d07c2908258bae description=kube-system/kindnet-5zzk9/kindnet-cni id=92081333-dcfa-413e-8df0-dc2b3200e29d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8208f390c38e9dcdfcf1dfd3262eccb3135b1ec8cb4eb4f0e9b2b4f0efb64e68
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.809147584Z" level=info msg="Starting container: b16beb4e4b195daeeefa06631cdab33892ab5de00e1eaa4f3d42a32591fc4c36" id=32407dc6-09a7-4323-93a3-569f2a1eca9d name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.812949113Z" level=info msg="Started container" PID=1068 containerID=b16beb4e4b195daeeefa06631cdab33892ab5de00e1eaa4f3d42a32591fc4c36 description=kube-system/kube-proxy-p8v8n/kube-proxy id=32407dc6-09a7-4323-93a3-569f2a1eca9d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0cc146fbfa2a1f55885742a7303a3195bdbcee7f25ab7cee37c0298b36f7fb4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cc1cd9bf7531e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   8208f390c38e9       kindnet-5zzk9                               kube-system
	b16beb4e4b195       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   4 seconds ago       Running             kube-proxy                1                   f0cc146fbfa2a       kube-proxy-p8v8n                            kube-system
	1fde05929ea13       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   7 seconds ago       Running             kube-controller-manager   1                   89c40815e4472       kube-controller-manager-newest-cni-858719   kube-system
	09b2ae0a7c5b9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   7 seconds ago       Running             etcd                      1                   313a36b7c28f9       etcd-newest-cni-858719                      kube-system
	20259f47f9c60       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   7 seconds ago       Running             kube-scheduler            1                   66e6af5a01eca       kube-scheduler-newest-cni-858719            kube-system
	60889310640bb       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   7 seconds ago       Running             kube-apiserver            1                   97a28c758cf7d       kube-apiserver-newest-cni-858719            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-858719
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-858719
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=newest-cni-858719
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_36_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:36:10 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-858719
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:36:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:36:40 +0000   Sun, 07 Dec 2025 23:36:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:36:40 +0000   Sun, 07 Dec 2025 23:36:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:36:40 +0000   Sun, 07 Dec 2025 23:36:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 07 Dec 2025 23:36:40 +0000   Sun, 07 Dec 2025 23:36:08 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-858719
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                2fe19260-c79d-4da0-b8eb-1e49571b8323
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-858719                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-5zzk9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-858719             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-858719    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-p8v8n                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-858719             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node newest-cni-858719 event: Registered Node newest-cni-858719 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-858719 event: Registered Node newest-cni-858719 in Controller
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [09b2ae0a7c5b9e30441c564fc12ee45fca2591d70a3b0c4f829362d1f7b1c11c] <==
	{"level":"warn","ts":"2025-12-07T23:36:39.885101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.893280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.901964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.909898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.918210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.926951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.933804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.945516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.956317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.964116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.971294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.982312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.986484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.994580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.003959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.011043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.018731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.041796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.045822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.057905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.066450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.074513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.137833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54652","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T23:36:40.877544Z","caller":"traceutil/trace.go:172","msg":"trace[35235451] transaction","detail":"{read_only:false; number_of_response:1; response_revision:427; }","duration":"121.657656ms","start":"2025-12-07T23:36:40.755868Z","end":"2025-12-07T23:36:40.877525Z","steps":["trace[35235451] 'process raft request'  (duration: 121.043298ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T23:36:41.047639Z","caller":"traceutil/trace.go:172","msg":"trace[1342702817] transaction","detail":"{read_only:false; number_of_response:0; response_revision:433; }","duration":"121.561622ms","start":"2025-12-07T23:36:40.926036Z","end":"2025-12-07T23:36:41.047597Z","steps":["trace[1342702817] 'process raft request'  (duration: 96.242181ms)","trace[1342702817] 'compare'  (duration: 25.273928ms)"],"step_count":2}
	
	
	==> kernel <==
	 23:36:46 up  2:19,  0 user,  load average: 4.73, 2.89, 2.07
	Linux newest-cni-858719 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cc1cd9bf7531e730eee0e48829fb2f2262509a9acb9a58a449d07c2908258bae] <==
	I1207 23:36:42.048286       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:36:42.048568       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1207 23:36:42.048723       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:36:42.048748       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:36:42.048770       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:36:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:36:42.257782       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:36:42.348067       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:36:42.348126       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:36:42.348355       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:36:42.648217       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:36:42.648251       1 metrics.go:72] Registering metrics
	I1207 23:36:42.648828       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [60889310640bb67836703a1f3f74d931394169d4bb63a245566fc54bf5762844] <==
	I1207 23:36:40.754679       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:40.754918       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 23:36:40.754943       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:40.755103       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1207 23:36:40.755573       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1207 23:36:40.755654       1 aggregator.go:187] initial CRD sync complete...
	I1207 23:36:40.755686       1 autoregister_controller.go:144] Starting autoregister controller
	I1207 23:36:40.755712       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:36:40.755722       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:36:40.760055       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1207 23:36:40.762051       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:36:40.766075       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:40.801611       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1207 23:36:40.905818       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:36:41.200632       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:36:41.251471       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:36:41.285007       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:36:41.297758       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:36:41.313590       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:36:41.382159       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.125.6"}
	I1207 23:36:41.402410       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.227.102"}
	I1207 23:36:41.657235       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1207 23:36:44.418656       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:36:44.469747       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:36:44.517425       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1fde05929ea13b803231bae6fb303618dc3a2b54347fde44f9fc6cbc20d0c478] <==
	I1207 23:36:43.921921       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-858719"
	I1207 23:36:43.922553       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1207 23:36:43.922583       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.922840       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.922889       1 range_allocator.go:177] "Sending events to api server"
	I1207 23:36:43.922938       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1207 23:36:43.922944       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:36:43.922949       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.922989       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923019       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.921508       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923051       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923068       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923045       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923419       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923446       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923496       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923525       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923702       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.925999       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:36:43.936401       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:44.021908       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:44.021930       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:36:44.021936       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:36:44.026273       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [b16beb4e4b195daeeefa06631cdab33892ab5de00e1eaa4f3d42a32591fc4c36] <==
	I1207 23:36:41.861916       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:36:41.930396       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:36:42.030955       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:42.031023       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1207 23:36:42.031129       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:36:42.060916       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:36:42.060984       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:36:42.069384       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:36:42.071021       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:36:42.071247       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:36:42.078442       1 config.go:200] "Starting service config controller"
	I1207 23:36:42.078481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:36:42.078623       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:36:42.078656       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:36:42.078683       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:36:42.078688       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:36:42.079316       1 config.go:309] "Starting node config controller"
	I1207 23:36:42.079356       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:36:42.079364       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:36:42.178648       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:36:42.178805       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:36:42.178802       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [20259f47f9c60903d1615e570f4a362857f9df6b8c1ceeeb7dae4a4a6bddec57] <==
	I1207 23:36:39.100406       1 serving.go:386] Generated self-signed cert in-memory
	W1207 23:36:40.716866       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:36:40.716909       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:36:40.716921       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:36:40.716931       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:36:40.737202       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 23:36:40.737237       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:36:40.740575       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:36:40.740605       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:36:40.740785       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:36:40.741532       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:36:40.840941       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.050089     665 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-858719\" already exists" pod="kube-system/kube-scheduler-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.433704     665 apiserver.go:52] "Watching apiserver"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.443279     665 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.513463     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-858719" containerName="kube-controller-manager"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.514470     665 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.513472     665 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.513591     665 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.516256     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/494a11f1-086c-43f3-92e7-4b59d073c5f9-xtables-lock\") pod \"kube-proxy-p8v8n\" (UID: \"494a11f1-086c-43f3-92e7-4b59d073c5f9\") " pod="kube-system/kube-proxy-p8v8n"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.516341     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/494a11f1-086c-43f3-92e7-4b59d073c5f9-lib-modules\") pod \"kube-proxy-p8v8n\" (UID: \"494a11f1-086c-43f3-92e7-4b59d073c5f9\") " pod="kube-system/kube-proxy-p8v8n"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.516380     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8e05261-d743-488e-9543-b60973ff09b4-xtables-lock\") pod \"kindnet-5zzk9\" (UID: \"b8e05261-d743-488e-9543-b60973ff09b4\") " pod="kube-system/kindnet-5zzk9"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.516403     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b8e05261-d743-488e-9543-b60973ff09b4-cni-cfg\") pod \"kindnet-5zzk9\" (UID: \"b8e05261-d743-488e-9543-b60973ff09b4\") " pod="kube-system/kindnet-5zzk9"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.516428     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8e05261-d743-488e-9543-b60973ff09b4-lib-modules\") pod \"kindnet-5zzk9\" (UID: \"b8e05261-d743-488e-9543-b60973ff09b4\") " pod="kube-system/kindnet-5zzk9"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.551109     665 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-858719\" already exists" pod="kube-system/kube-apiserver-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.551647     665 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-858719\" already exists" pod="kube-system/kube-scheduler-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.551275     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-858719" containerName="kube-apiserver"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.551846     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-858719" containerName="kube-scheduler"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.552644     665 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-858719\" already exists" pod="kube-system/etcd-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.552820     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-858719" containerName="etcd"
	Dec 07 23:36:42 newest-cni-858719 kubelet[665]: E1207 23:36:42.520170     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-858719" containerName="etcd"
	Dec 07 23:36:42 newest-cni-858719 kubelet[665]: E1207 23:36:42.520236     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-858719" containerName="kube-apiserver"
	Dec 07 23:36:42 newest-cni-858719 kubelet[665]: E1207 23:36:42.520469     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-858719" containerName="kube-scheduler"
	Dec 07 23:36:43 newest-cni-858719 kubelet[665]: I1207 23:36:43.297292     665 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 07 23:36:43 newest-cni-858719 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 07 23:36:43 newest-cni-858719 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 07 23:36:43 newest-cni-858719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-858719 -n newest-cni-858719
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-858719 -n newest-cni-858719: exit status 2 (334.123695ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-858719 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-dp6qz storage-provisioner dashboard-metrics-scraper-867fb5f87b-4z8k9 kubernetes-dashboard-b84665fb8-fsbs4
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-858719 describe pod coredns-7d764666f9-dp6qz storage-provisioner dashboard-metrics-scraper-867fb5f87b-4z8k9 kubernetes-dashboard-b84665fb8-fsbs4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-858719 describe pod coredns-7d764666f9-dp6qz storage-provisioner dashboard-metrics-scraper-867fb5f87b-4z8k9 kubernetes-dashboard-b84665fb8-fsbs4: exit status 1 (64.449708ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-dp6qz" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-4z8k9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-fsbs4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-858719 describe pod coredns-7d764666f9-dp6qz storage-provisioner dashboard-metrics-scraper-867fb5f87b-4z8k9 kubernetes-dashboard-b84665fb8-fsbs4: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-858719
helpers_test.go:243: (dbg) docker inspect newest-cni-858719:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0",
	        "Created": "2025-12-07T23:36:01.669904707Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 678149,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:36:30.514754277Z",
	            "FinishedAt": "2025-12-07T23:36:29.411067332Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0/hostname",
	        "HostsPath": "/var/lib/docker/containers/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0/hosts",
	        "LogPath": "/var/lib/docker/containers/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0/a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0-json.log",
	        "Name": "/newest-cni-858719",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-858719:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-858719",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a277f941d9190e49275105cd1f19ecb686250ba6117a13149b83ad3f828022d0",
	                "LowerDir": "/var/lib/docker/overlay2/c1a2963994212dfc7e08a1440d19707a2cf4a7d92846359bfe33ec782362bc68-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c1a2963994212dfc7e08a1440d19707a2cf4a7d92846359bfe33ec782362bc68/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c1a2963994212dfc7e08a1440d19707a2cf4a7d92846359bfe33ec782362bc68/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c1a2963994212dfc7e08a1440d19707a2cf4a7d92846359bfe33ec782362bc68/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-858719",
	                "Source": "/var/lib/docker/volumes/newest-cni-858719/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-858719",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-858719",
	                "name.minikube.sigs.k8s.io": "newest-cni-858719",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b787ab5937ad70d3972573f353a6c0068f443d650fb3187cbc511a004d0ecdc8",
	            "SandboxKey": "/var/run/docker/netns/b787ab5937ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-858719": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "688a12ba5396bc3f2e98a59b391778bc7eb9ccbb9500e4ff61c9584eece383c6",
	                    "EndpointID": "a84440903c82dd8dad9d3d6506eb0e2b53cb429c9101dece6d93a83b5d5bdaa5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "72:1f:c5:d7:a2:a0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-858719",
	                        "a277f941d919"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-858719 -n newest-cni-858719
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-858719 -n newest-cni-858719: exit status 2 (344.878379ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-858719 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-858719 logs -n 25: (1.003362682s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p disable-driver-mounts-837628                                                                                                                                                                                                                      │ disable-driver-mounts-837628 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p default-k8s-diff-port-312944 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:36 UTC │
	│ delete  │ -p kubernetes-upgrade-703538                                                                                                                                                                                                                         │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-654118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ stop    │ -p embed-certs-654118 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ image   │ no-preload-313006 image list --format=json                                                                                                                                                                                                           │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ pause   │ -p no-preload-313006 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ delete  │ -p no-preload-313006                                                                                                                                                                                                                                 │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-858719 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-654118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ delete  │ -p no-preload-313006                                                                                                                                                                                                                                 │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ start   │ -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ start   │ -p auto-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ stop    │ -p newest-cni-858719 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-858719 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ start   │ -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ image   │ newest-cni-858719 image list --format=json                                                                                                                                                                                                           │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ pause   │ -p newest-cni-858719 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-312944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-312944 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:36:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:36:30.199382  677704 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:36:30.199678  677704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:36:30.199690  677704 out.go:374] Setting ErrFile to fd 2...
	I1207 23:36:30.199696  677704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:36:30.199985  677704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:36:30.200696  677704 out.go:368] Setting JSON to false
	I1207 23:36:30.202255  677704 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8334,"bootTime":1765142256,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:36:30.202356  677704 start.go:143] virtualization: kvm guest
	I1207 23:36:30.204485  677704 out.go:179] * [newest-cni-858719] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:36:30.206079  677704 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:36:30.206102  677704 notify.go:221] Checking for updates...
	I1207 23:36:30.208549  677704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:36:30.209775  677704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:36:30.214561  677704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:36:30.215983  677704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:36:30.217521  677704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:36:30.219339  677704 config.go:182] Loaded profile config "newest-cni-858719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:30.220075  677704 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:36:30.244737  677704 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:36:30.244935  677704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:36:30.311650  677704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-07 23:36:30.299453318 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:36:30.311817  677704 docker.go:319] overlay module found
	I1207 23:36:30.315570  677704 out.go:179] * Using the docker driver based on existing profile
	I1207 23:36:30.317497  677704 start.go:309] selected driver: docker
	I1207 23:36:30.317524  677704 start.go:927] validating driver "docker" against &{Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:30.317669  677704 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:36:30.318487  677704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:36:30.399830  677704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-07 23:36:30.383304383 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:36:30.401873  677704 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1207 23:36:30.401972  677704 cni.go:84] Creating CNI manager for ""
	I1207 23:36:30.402072  677704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:30.402132  677704 start.go:353] cluster config:
	{Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:30.404155  677704 out.go:179] * Starting "newest-cni-858719" primary control-plane node in "newest-cni-858719" cluster
	I1207 23:36:30.405367  677704 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:36:30.406789  677704 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:36:30.408087  677704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:36:30.408131  677704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1207 23:36:30.408146  677704 cache.go:65] Caching tarball of preloaded images
	I1207 23:36:30.408265  677704 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:36:30.408277  677704 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1207 23:36:30.408426  677704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json ...
	I1207 23:36:30.408463  677704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:36:30.446133  677704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:36:30.446161  677704 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:36:30.446179  677704 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:36:30.446224  677704 start.go:360] acquireMachinesLock for newest-cni-858719: {Name:mk3f9783a06cd72eff911e9615fc59e854b06695 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:36:30.446291  677704 start.go:364] duration metric: took 37.32µs to acquireMachinesLock for "newest-cni-858719"
	I1207 23:36:30.446316  677704 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:36:30.446340  677704 fix.go:54] fixHost starting: 
	I1207 23:36:30.446637  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:30.475469  677704 fix.go:112] recreateIfNeeded on newest-cni-858719: state=Stopped err=<nil>
	W1207 23:36:30.475505  677704 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:36:30.014058  673565 start.go:296] duration metric: took 177.314443ms for postStartSetup
	I1207 23:36:30.014519  673565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-600852
	I1207 23:36:30.038610  673565 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/config.json ...
	I1207 23:36:30.038964  673565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:36:30.039016  673565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-600852
	I1207 23:36:30.065777  673565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/auto-600852/id_rsa Username:docker}
	I1207 23:36:30.162043  673565 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:36:30.167397  673565 start.go:128] duration metric: took 9.888881461s to createHost
	I1207 23:36:30.167425  673565 start.go:83] releasing machines lock for "auto-600852", held for 9.889029296s
	I1207 23:36:30.167504  673565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-600852
	I1207 23:36:30.187852  673565 ssh_runner.go:195] Run: cat /version.json
	I1207 23:36:30.187898  673565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-600852
	I1207 23:36:30.187900  673565 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:36:30.187983  673565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-600852
	I1207 23:36:30.209183  673565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/auto-600852/id_rsa Username:docker}
	I1207 23:36:30.209573  673565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/auto-600852/id_rsa Username:docker}
	I1207 23:36:30.375401  673565 ssh_runner.go:195] Run: systemctl --version
	I1207 23:36:30.387970  673565 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:36:30.451937  673565 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:36:30.463288  673565 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:36:30.463383  673565 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:36:30.500519  673565 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 23:36:30.500548  673565 start.go:496] detecting cgroup driver to use...
	I1207 23:36:30.500586  673565 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:36:30.500644  673565 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:36:30.523553  673565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:36:30.542090  673565 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:36:30.542193  673565 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:36:30.562685  673565 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:36:30.590093  673565 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:36:30.714368  673565 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:36:30.834479  673565 docker.go:234] disabling docker service ...
	I1207 23:36:30.834549  673565 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:36:30.869941  673565 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:36:30.894568  673565 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:36:31.002667  673565 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:36:31.119924  673565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:36:31.142153  673565 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:36:31.163106  673565 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:36:31.163177  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.174874  673565 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:36:31.174957  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.186962  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.197787  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.208567  673565 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:36:31.217977  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.228985  673565 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.243864  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.253438  673565 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:36:31.261577  673565 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:36:31.269437  673565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:31.349977  673565 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:36:31.501537  673565 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:36:31.501610  673565 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:36:31.507081  673565 start.go:564] Will wait 60s for crictl version
	I1207 23:36:31.507153  673565 ssh_runner.go:195] Run: which crictl
	I1207 23:36:31.511425  673565 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:36:31.539351  673565 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:36:31.539441  673565 ssh_runner.go:195] Run: crio --version
	I1207 23:36:31.569558  673565 ssh_runner.go:195] Run: crio --version
	I1207 23:36:31.600664  673565 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1207 23:36:30.349629  663227 node_ready.go:57] node "default-k8s-diff-port-312944" has "Ready":"False" status (will retry)
	I1207 23:36:30.849610  663227 node_ready.go:49] node "default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:30.849651  663227 node_ready.go:38] duration metric: took 11.006384498s for node "default-k8s-diff-port-312944" to be "Ready" ...
	I1207 23:36:30.849671  663227 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:36:30.849731  663227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:36:30.872873  663227 api_server.go:72] duration metric: took 11.455368709s to wait for apiserver process to appear ...
	I1207 23:36:30.873121  663227 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:36:30.873147  663227 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1207 23:36:30.882134  663227 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1207 23:36:30.883434  663227 api_server.go:141] control plane version: v1.34.2
	I1207 23:36:30.883472  663227 api_server.go:131] duration metric: took 10.341551ms to wait for apiserver health ...
	I1207 23:36:30.883493  663227 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:36:30.888989  663227 system_pods.go:59] 8 kube-system pods found
	I1207 23:36:30.889030  663227 system_pods.go:61] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:30.889038  663227 system_pods.go:61] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:30.889046  663227 system_pods.go:61] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:30.889052  663227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:30.889058  663227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:30.889063  663227 system_pods.go:61] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:30.889069  663227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:30.889076  663227 system_pods.go:61] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:30.889086  663227 system_pods.go:74] duration metric: took 5.585227ms to wait for pod list to return data ...
	I1207 23:36:30.889097  663227 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:36:30.892279  663227 default_sa.go:45] found service account: "default"
	I1207 23:36:30.892306  663227 default_sa.go:55] duration metric: took 3.201148ms for default service account to be created ...
	I1207 23:36:30.892318  663227 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:36:30.896636  663227 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:30.896687  663227 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:30.896696  663227 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:30.896704  663227 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:30.896710  663227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:30.896735  663227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:30.896745  663227 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:30.896751  663227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:30.896758  663227 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:30.896786  663227 retry.go:31] will retry after 222.292044ms: missing components: kube-dns
	I1207 23:36:31.126979  663227 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:31.127080  663227 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:31.127100  663227 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:31.127109  663227 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:31.127120  663227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:31.127129  663227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:31.127135  663227 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:31.127139  663227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:31.127147  663227 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:31.127169  663227 retry.go:31] will retry after 307.291664ms: missing components: kube-dns
	I1207 23:36:31.440222  663227 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:31.440265  663227 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:31.440273  663227 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:31.440283  663227 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:31.440290  663227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:31.440295  663227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:31.440302  663227 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:31.440307  663227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:31.440314  663227 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:31.440354  663227 retry.go:31] will retry after 426.001876ms: missing components: kube-dns
	I1207 23:36:31.871913  663227 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:31.871946  663227 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Running
	I1207 23:36:31.871953  663227 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:31.871957  663227 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:31.871961  663227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:31.871968  663227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:31.871973  663227 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:31.871978  663227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:31.871982  663227 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Running
	I1207 23:36:31.871993  663227 system_pods.go:126] duration metric: took 979.653637ms to wait for k8s-apps to be running ...
	I1207 23:36:31.872008  663227 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:36:31.872059  663227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:31.886268  663227 system_svc.go:56] duration metric: took 14.248421ms WaitForService to wait for kubelet
	I1207 23:36:31.886301  663227 kubeadm.go:587] duration metric: took 12.468803502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:36:31.886319  663227 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:36:31.889484  663227 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:36:31.889518  663227 node_conditions.go:123] node cpu capacity is 8
	I1207 23:36:31.889536  663227 node_conditions.go:105] duration metric: took 3.211978ms to run NodePressure ...
	I1207 23:36:31.889549  663227 start.go:242] waiting for startup goroutines ...
	I1207 23:36:31.889557  663227 start.go:247] waiting for cluster config update ...
	I1207 23:36:31.889567  663227 start.go:256] writing updated cluster config ...
	I1207 23:36:31.889825  663227 ssh_runner.go:195] Run: rm -f paused
	I1207 23:36:31.893873  663227 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:36:31.900462  663227 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p4v2f" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.905254  663227 pod_ready.go:94] pod "coredns-66bc5c9577-p4v2f" is "Ready"
	I1207 23:36:31.905281  663227 pod_ready.go:86] duration metric: took 4.791855ms for pod "coredns-66bc5c9577-p4v2f" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.908065  663227 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.913118  663227 pod_ready.go:94] pod "etcd-default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:31.913140  663227 pod_ready.go:86] duration metric: took 5.030101ms for pod "etcd-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.914938  663227 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.918718  663227 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:31.918742  663227 pod_ready.go:86] duration metric: took 3.786001ms for pod "kube-apiserver-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.920411  663227 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:32.299220  663227 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:32.299254  663227 pod_ready.go:86] duration metric: took 378.816082ms for pod "kube-controller-manager-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:32.498428  663227 pod_ready.go:83] waiting for pod "kube-proxy-7stg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:32.898763  663227 pod_ready.go:94] pod "kube-proxy-7stg5" is "Ready"
	I1207 23:36:32.898796  663227 pod_ready.go:86] duration metric: took 400.341199ms for pod "kube-proxy-7stg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:33.099537  663227 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:33.499044  663227 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:33.499080  663227 pod_ready.go:86] duration metric: took 399.514812ms for pod "kube-scheduler-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:33.499097  663227 pod_ready.go:40] duration metric: took 1.605186446s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:36:33.554736  663227 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 23:36:33.556778  663227 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-312944" cluster and "default" namespace by default
	I1207 23:36:30.017812  673247 addons.go:530] duration metric: took 2.349839549s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1207 23:36:30.500546  673247 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1207 23:36:30.506524  673247 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:36:30.506554  673247 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:36:30.999819  673247 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1207 23:36:31.005585  673247 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1207 23:36:31.006742  673247 api_server.go:141] control plane version: v1.34.2
	I1207 23:36:31.006775  673247 api_server.go:131] duration metric: took 1.007113458s to wait for apiserver health ...
	I1207 23:36:31.006788  673247 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:36:31.011558  673247 system_pods.go:59] 8 kube-system pods found
	I1207 23:36:31.011611  673247 system_pods.go:61] "coredns-66bc5c9577-wvgqf" [80c1683b-a66c-4dd4-8d91-0e5cc2bd5e18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:31.011624  673247 system_pods.go:61] "etcd-embed-certs-654118" [b79ec937-fed7-4df6-9a57-24d6513402e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:36:31.011635  673247 system_pods.go:61] "kindnet-68q87" [7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 23:36:31.011645  673247 system_pods.go:61] "kube-apiserver-embed-certs-654118" [f6fab7ae-3dd9-48d2-8b83-9f72e33bbee1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:36:31.011655  673247 system_pods.go:61] "kube-controller-manager-embed-certs-654118" [9748b389-d642-4475-bc81-39199511f4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:36:31.011664  673247 system_pods.go:61] "kube-proxy-l75b2" [2f061a54-3641-473d-9c6a-77e51062e955] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 23:36:31.011671  673247 system_pods.go:61] "kube-scheduler-embed-certs-654118" [eb585812-9353-43b0-a610-34f3fcb6d32f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:36:31.011678  673247 system_pods.go:61] "storage-provisioner" [34685d0c-67b3-4683-b817-772fa2ef1c77] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:31.011701  673247 system_pods.go:74] duration metric: took 4.903872ms to wait for pod list to return data ...
	I1207 23:36:31.011712  673247 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:36:31.014761  673247 default_sa.go:45] found service account: "default"
	I1207 23:36:31.014791  673247 default_sa.go:55] duration metric: took 3.070892ms for default service account to be created ...
	I1207 23:36:31.014804  673247 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:36:31.018030  673247 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:31.018077  673247 system_pods.go:89] "coredns-66bc5c9577-wvgqf" [80c1683b-a66c-4dd4-8d91-0e5cc2bd5e18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:31.018089  673247 system_pods.go:89] "etcd-embed-certs-654118" [b79ec937-fed7-4df6-9a57-24d6513402e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:36:31.018098  673247 system_pods.go:89] "kindnet-68q87" [7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 23:36:31.018106  673247 system_pods.go:89] "kube-apiserver-embed-certs-654118" [f6fab7ae-3dd9-48d2-8b83-9f72e33bbee1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:36:31.018121  673247 system_pods.go:89] "kube-controller-manager-embed-certs-654118" [9748b389-d642-4475-bc81-39199511f4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:36:31.018134  673247 system_pods.go:89] "kube-proxy-l75b2" [2f061a54-3641-473d-9c6a-77e51062e955] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 23:36:31.018142  673247 system_pods.go:89] "kube-scheduler-embed-certs-654118" [eb585812-9353-43b0-a610-34f3fcb6d32f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:36:31.018148  673247 system_pods.go:89] "storage-provisioner" [34685d0c-67b3-4683-b817-772fa2ef1c77] Running
	I1207 23:36:31.018164  673247 system_pods.go:126] duration metric: took 3.352378ms to wait for k8s-apps to be running ...
	I1207 23:36:31.018176  673247 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:36:31.018232  673247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:31.034999  673247 system_svc.go:56] duration metric: took 16.811304ms WaitForService to wait for kubelet
	I1207 23:36:31.035038  673247 kubeadm.go:587] duration metric: took 3.36708951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:36:31.035063  673247 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:36:31.037964  673247 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:36:31.037997  673247 node_conditions.go:123] node cpu capacity is 8
	I1207 23:36:31.038017  673247 node_conditions.go:105] duration metric: took 2.947717ms to run NodePressure ...
	I1207 23:36:31.038038  673247 start.go:242] waiting for startup goroutines ...
	I1207 23:36:31.038047  673247 start.go:247] waiting for cluster config update ...
	I1207 23:36:31.038060  673247 start.go:256] writing updated cluster config ...
	I1207 23:36:31.038388  673247 ssh_runner.go:195] Run: rm -f paused
	I1207 23:36:31.045933  673247 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:36:31.051360  673247 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wvgqf" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:36:33.056839  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	I1207 23:36:31.601878  673565 cli_runner.go:164] Run: docker network inspect auto-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:36:31.621720  673565 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1207 23:36:31.626504  673565 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
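The one-liner above is minikube's idempotent /etc/hosts update. Unrolled for readability (a sketch only; the IP, hostname, and temp-file naming are taken from the log line above):

    # Drop any stale host.minikube.internal entry, append the current mapping,
    # then copy the result back over /etc/hosts (the redirection runs
    # unprivileged, which is why a temp file plus `sudo cp` is used).
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.85.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts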
	I1207 23:36:31.638820  673565 kubeadm.go:884] updating cluster {Name:auto-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:36:31.638979  673565 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:36:31.639045  673565 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:31.671512  673565 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:31.671537  673565 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:36:31.671584  673565 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:31.698600  673565 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:31.698621  673565 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:36:31.698629  673565 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1207 23:36:31.698758  673565 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-600852 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:36:31.698849  673565 ssh_runner.go:195] Run: crio config
	I1207 23:36:31.748038  673565 cni.go:84] Creating CNI manager for ""
	I1207 23:36:31.748064  673565 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:31.748082  673565 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:36:31.748110  673565 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-600852 NodeName:auto-600852 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:36:31.748274  673565 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-600852"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:36:31.748395  673565 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:36:31.757145  673565 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:36:31.757219  673565 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:36:31.766099  673565 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1207 23:36:31.779629  673565 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:36:31.800018  673565 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
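The 2207-byte kubeadm.yaml.new staged here is the multi-document config dumped a few lines above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). To sanity-check such a file on the node by hand, something along these lines works (a sketch; the kubeadm path matches the binaries directory minikube uses elsewhere in this log, and `kubeadm config validate` is only available on recent kubeadm releases):

    # Print this kubeadm version's defaults for comparison with the staged file.
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config print init-defaults
    # Validate the staged multi-document config before it is renamed to kubeadm.yaml.
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new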
	I1207 23:36:31.817264  673565 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:36:31.822473  673565 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:31.834622  673565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:31.928227  673565 ssh_runner.go:195] Run: sudo systemctl start kubelet
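At this point the kubelet unit (/lib/systemd/system/kubelet.service) and its 10-kubeadm.conf drop-in have been written and the service started. Assuming a shell on the node, standard systemd tooling confirms they were merged as intended (a verification sketch, not part of the minikube flow):

    # Show the unit file together with its drop-ins, exactly as systemd merged them.
    sudo systemctl cat kubelet
    # Confirm the overridden ExecStart (binaries path, --node-ip, --hostname-override) took effect.
    systemctl show kubelet -p ExecStart --no-pager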
	I1207 23:36:31.958251  673565 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852 for IP: 192.168.85.2
	I1207 23:36:31.958272  673565 certs.go:195] generating shared ca certs ...
	I1207 23:36:31.958288  673565 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:31.958457  673565 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:36:31.958513  673565 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:36:31.958523  673565 certs.go:257] generating profile certs ...
	I1207 23:36:31.958577  673565 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.key
	I1207 23:36:31.958592  673565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.crt with IP's: []
	I1207 23:36:32.182791  673565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.crt ...
	I1207 23:36:32.182826  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.crt: {Name:mkcb703f0f9e4b0a56f30bafc152e39ee98c32af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.183061  673565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.key ...
	I1207 23:36:32.183086  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.key: {Name:mk33e4c8c1a1e58f23780f89a8c200357fe9af2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.183245  673565 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key.5c32f241
	I1207 23:36:32.183269  673565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt.5c32f241 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1207 23:36:32.472518  673565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt.5c32f241 ...
	I1207 23:36:32.472552  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt.5c32f241: {Name:mkd72f567c38cb3b6e2eeb019eb8803d7c9b9ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.472743  673565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key.5c32f241 ...
	I1207 23:36:32.472756  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key.5c32f241: {Name:mk6a31094374001ab612b14e9c18e5030a69691d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.472836  673565 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt.5c32f241 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt
	I1207 23:36:32.472933  673565 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key.5c32f241 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key
	I1207 23:36:32.472997  673565 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.key
	I1207 23:36:32.473022  673565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.crt with IP's: []
	I1207 23:36:32.610842  673565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.crt ...
	I1207 23:36:32.610871  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.crt: {Name:mkdfed3c317c9a9b5274d2282923661c521bedc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.611075  673565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.key ...
	I1207 23:36:32.611096  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.key: {Name:mk38fd78995b6a1d76b48fda10f3d7ef0f5e91f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.611376  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:36:32.611433  673565 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:36:32.611449  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:36:32.611509  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:36:32.611544  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:36:32.611577  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:36:32.611637  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:32.612219  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:36:32.631785  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:36:32.651000  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:36:32.670569  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:36:32.690024  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1207 23:36:32.708926  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:36:32.727240  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:36:32.751398  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:36:32.776129  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:36:32.799218  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:36:32.818906  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:36:32.839578  673565 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:36:32.853944  673565 ssh_runner.go:195] Run: openssl version
	I1207 23:36:32.860417  673565 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:32.869087  673565 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:36:32.877433  673565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:32.881465  673565 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:32.881547  673565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:32.920658  673565 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:36:32.928919  673565 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:36:32.937680  673565 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:36:32.945804  673565 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:36:32.955606  673565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:36:32.959865  673565 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:36:32.959922  673565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:36:32.996040  673565 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:36:33.004381  673565 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
	I1207 23:36:33.012360  673565 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:36:33.020201  673565 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:36:33.028224  673565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:36:33.032626  673565 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:36:33.032716  673565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:36:33.069017  673565 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:33.078318  673565 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
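The `openssl x509 -hash` / `ln -fs` pairs above build the standard OpenSSL CA-directory layout: each trusted certificate is reachable through a symlink named after its subject-name hash with a `.0` suffix, which is how libssl looks CAs up in /etc/ssl/certs. The same mechanism for an arbitrary certificate (example.pem is a placeholder):

    # Compute the subject-name hash OpenSSL uses for CA-directory lookups.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    # Expose the certificate under <hash>.0 so TLS clients on the node trust it.
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"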
	I1207 23:36:33.086473  673565 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:36:33.090434  673565 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:36:33.090491  673565 kubeadm.go:401] StartCluster: {Name:auto-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:33.090588  673565 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:36:33.090632  673565 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:36:33.118539  673565 cri.go:89] found id: ""
	I1207 23:36:33.118605  673565 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:36:33.127222  673565 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:36:33.135780  673565 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:36:33.135833  673565 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:36:33.144151  673565 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:36:33.144172  673565 kubeadm.go:158] found existing configuration files:
	
	I1207 23:36:33.144215  673565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:36:33.152854  673565 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:36:33.152928  673565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:36:33.160896  673565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:36:33.168822  673565 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:36:33.168877  673565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:36:33.176284  673565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:36:33.184383  673565 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:36:33.184442  673565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:36:33.193714  673565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:36:33.202016  673565 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:36:33.202077  673565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 23:36:33.210129  673565 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:36:33.271747  673565 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 23:36:33.334835  673565 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
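Both preflight warnings are expected in this environment: the GCP kernel (6.8.0-1044-gcp) ships no loadable "configs" module for the kernel-config check, and kubelet is launched by minikube rather than enabled as a systemd unit. On a node you manage yourself, the second warning is silenced exactly as kubeadm suggests:

    # Enable the kubelet unit so kubeadm's Service-Kubelet preflight check passes.
    sudo systemctl enable kubelet.service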
	I1207 23:36:30.477140  677704 out.go:252] * Restarting existing docker container for "newest-cni-858719" ...
	I1207 23:36:30.477215  677704 cli_runner.go:164] Run: docker start newest-cni-858719
	I1207 23:36:30.809394  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:30.836380  677704 kic.go:430] container "newest-cni-858719" state is running.
	I1207 23:36:30.836921  677704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:30.866477  677704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json ...
	I1207 23:36:30.866809  677704 machine.go:94] provisionDockerMachine start ...
	I1207 23:36:30.866882  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:30.898514  677704 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:30.898872  677704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1207 23:36:30.898893  677704 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:36:30.899781  677704 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50554->127.0.0.1:33473: read: connection reset by peer
	I1207 23:36:34.032697  677704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-858719
	
	I1207 23:36:34.032735  677704 ubuntu.go:182] provisioning hostname "newest-cni-858719"
	I1207 23:36:34.032802  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.054768  677704 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:34.055076  677704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1207 23:36:34.055103  677704 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-858719 && echo "newest-cni-858719" | sudo tee /etc/hostname
	I1207 23:36:34.201076  677704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-858719
	
	I1207 23:36:34.201188  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.220957  677704 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:34.221305  677704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1207 23:36:34.221350  677704 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-858719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-858719/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-858719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:36:34.354180  677704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:36:34.354212  677704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:36:34.354255  677704 ubuntu.go:190] setting up certificates
	I1207 23:36:34.354268  677704 provision.go:84] configureAuth start
	I1207 23:36:34.354381  677704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:34.372396  677704 provision.go:143] copyHostCerts
	I1207 23:36:34.372463  677704 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:36:34.372474  677704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:36:34.372543  677704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:36:34.372653  677704 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:36:34.372662  677704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:36:34.372691  677704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:36:34.372767  677704 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:36:34.372775  677704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:36:34.372800  677704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:36:34.372863  677704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.newest-cni-858719 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-858719]
	I1207 23:36:34.438526  677704 provision.go:177] copyRemoteCerts
	I1207 23:36:34.438610  677704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:36:34.438661  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.457056  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:34.550753  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:36:34.569684  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1207 23:36:34.587851  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 23:36:34.605253  677704 provision.go:87] duration metric: took 250.964673ms to configureAuth
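configureAuth regenerated the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-858719) and copied it to /etc/docker/server.pem. To confirm what actually ended up in the certificate (an inspection sketch; the path is the one from the provisioning lines above):

    # List the Subject Alternative Names baked into the provisioned server certificate.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'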
	I1207 23:36:34.605281  677704 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:36:34.605478  677704 config.go:182] Loaded profile config "newest-cni-858719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:34.605592  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.623964  677704 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:34.624277  677704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1207 23:36:34.624303  677704 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:36:34.919543  677704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:36:34.919573  677704 machine.go:97] duration metric: took 4.052749993s to provisionDockerMachine
	I1207 23:36:34.919588  677704 start.go:293] postStartSetup for "newest-cni-858719" (driver="docker")
	I1207 23:36:34.919604  677704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:36:34.919670  677704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:36:34.919713  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.940317  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:35.042131  677704 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:36:35.047382  677704 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:36:35.047431  677704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:36:35.047446  677704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:36:35.047504  677704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:36:35.047605  677704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:36:35.047744  677704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:36:35.059463  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:35.084378  677704 start.go:296] duration metric: took 164.724573ms for postStartSetup
	I1207 23:36:35.084483  677704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:36:35.084536  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:35.108317  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:35.212214  677704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:36:35.219280  677704 fix.go:56] duration metric: took 4.772929293s for fixHost
	I1207 23:36:35.219313  677704 start.go:83] releasing machines lock for "newest-cni-858719", held for 4.773005701s
	I1207 23:36:35.219452  677704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:35.245630  677704 ssh_runner.go:195] Run: cat /version.json
	I1207 23:36:35.245689  677704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:36:35.245694  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:35.245779  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:35.270514  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:35.270842  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:35.457960  677704 ssh_runner.go:195] Run: systemctl --version
	I1207 23:36:35.466947  677704 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:36:35.513529  677704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:36:35.519931  677704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:36:35.520007  677704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:36:35.531091  677704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:36:35.531122  677704 start.go:496] detecting cgroup driver to use...
	I1207 23:36:35.531158  677704 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:36:35.531220  677704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:36:35.552715  677704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:36:35.570570  677704 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:36:35.570644  677704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:36:35.591911  677704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:36:35.609216  677704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:36:35.730291  677704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:36:35.849860  677704 docker.go:234] disabling docker service ...
	I1207 23:36:35.849939  677704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:36:35.870164  677704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:36:35.887316  677704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:36:36.010320  677704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:36:36.134166  677704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:36:36.151763  677704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:36:36.171658  677704 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:36:36.171724  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.185507  677704 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:36:36.185577  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.199807  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.212561  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.224857  677704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:36:36.236376  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.248851  677704 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.260134  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.271388  677704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:36:36.282450  677704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:36:36.292401  677704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:36.402590  677704 ssh_runner.go:195] Run: sudo systemctl restart crio
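The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before restarting CRI-O. Pieced together from those commands, the drop-in should now contain roughly the following keys (reconstructed from the log, not captured from the node):

    # Inspect the keys the sed edits were supposed to leave behind.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]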
	I1207 23:36:36.781588  677704 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:36:36.781654  677704 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:36:36.787090  677704 start.go:564] Will wait 60s for crictl version
	I1207 23:36:36.787149  677704 ssh_runner.go:195] Run: which crictl
	I1207 23:36:36.792213  677704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:36:36.824404  677704 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:36:36.824506  677704 ssh_runner.go:195] Run: crio --version
	I1207 23:36:36.862950  677704 ssh_runner.go:195] Run: crio --version
	I1207 23:36:36.905770  677704 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1207 23:36:36.907106  677704 cli_runner.go:164] Run: docker network inspect newest-cni-858719 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:36:36.931941  677704 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1207 23:36:36.937364  677704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:36.953376  677704 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1207 23:36:36.954739  677704 kubeadm.go:884] updating cluster {Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:36:36.954910  677704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:36:36.954978  677704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:37.001232  677704 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:37.001289  677704 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:36:37.001372  677704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:37.035868  677704 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:37.035911  677704 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:36:37.035920  677704 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1207 23:36:37.036047  677704 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-858719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1207 23:36:37.036135  677704 ssh_runner.go:195] Run: crio config
	I1207 23:36:37.100859  677704 cni.go:84] Creating CNI manager for ""
	I1207 23:36:37.100891  677704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:37.100916  677704 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1207 23:36:37.100949  677704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-858719 NodeName:newest-cni-858719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:36:37.101134  677704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-858719"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:36:37.101225  677704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1207 23:36:37.112723  677704 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:36:37.112803  677704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:36:37.124443  677704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1207 23:36:37.142815  677704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1207 23:36:37.160115  677704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1207 23:36:37.177233  677704 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:36:37.182248  677704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:37.195883  677704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:37.321978  677704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:36:37.349434  677704 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719 for IP: 192.168.76.2
	I1207 23:36:37.349460  677704 certs.go:195] generating shared ca certs ...
	I1207 23:36:37.349483  677704 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:37.349673  677704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:36:37.349732  677704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:36:37.349742  677704 certs.go:257] generating profile certs ...
	I1207 23:36:37.349907  677704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.key
	I1207 23:36:37.349978  677704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key.81fe4363
	I1207 23:36:37.350036  677704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key
	I1207 23:36:37.350178  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:36:37.350217  677704 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:36:37.350228  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:36:37.350264  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:36:37.350296  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:36:37.350347  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:36:37.350407  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:37.351226  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:36:37.377735  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:36:37.403808  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:36:37.427723  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:36:37.460810  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1207 23:36:37.487067  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:36:37.513861  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:36:37.539259  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:36:37.565376  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:36:37.592124  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:36:37.619212  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:36:37.647272  677704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:36:37.667351  677704 ssh_runner.go:195] Run: openssl version
	I1207 23:36:37.676513  677704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:36:37.687971  677704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:36:37.699159  677704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:36:37.704977  677704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:36:37.705049  677704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:36:37.765716  677704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:36:37.779131  677704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:36:37.793745  677704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:36:37.805547  677704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:36:37.811144  677704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:36:37.811212  677704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:36:37.854651  677704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:37.863269  677704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:37.872157  677704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:36:37.881013  677704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:37.886652  677704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:37.886726  677704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:37.925060  677704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:36:37.933601  677704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:36:37.937936  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:36:37.974013  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:36:38.011069  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:36:38.048975  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:36:38.089220  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:36:38.126552  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
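	The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate stays valid for at least the next 24 hours. A stdlib-only sketch of the same check (the path in main is illustrative; the report checks apiserver-kubelet-client.crt, etcd/server.crt, front-proxy-client.crt, and so on) looks roughly like:

	// Sketch: rough equivalent of `openssl x509 -noout -in cert.pem -checkend 86400`,
	// i.e. "does this certificate remain valid for at least the next 24h?".
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		deadline := time.Now().Add(d)
		return time.Now().After(cert.NotBefore) && deadline.Before(cert.NotAfter), nil
	}

	func main() {
		// Illustrative path.
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("valid for the next 24h:", ok)
	}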
	I1207 23:36:38.171830  677704 kubeadm.go:401] StartCluster: {Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:38.171932  677704 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:36:38.171998  677704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:36:38.202873  677704 cri.go:89] found id: ""
	I1207 23:36:38.202948  677704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:36:38.211787  677704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:36:38.211805  677704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:36:38.211858  677704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:36:38.220804  677704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:36:38.221673  677704 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-858719" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:36:38.222177  677704 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-389542/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-858719" cluster setting kubeconfig missing "newest-cni-858719" context setting]
	I1207 23:36:38.222947  677704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:38.242108  677704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:36:38.251961  677704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1207 23:36:38.251999  677704 kubeadm.go:602] duration metric: took 40.189524ms to restartPrimaryControlPlane
	I1207 23:36:38.252009  677704 kubeadm.go:403] duration metric: took 80.190889ms to StartCluster
	I1207 23:36:38.252030  677704 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:38.252111  677704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:36:38.253734  677704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
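	Around 23:36:38.22 the profile kubeconfig is repaired because the "newest-cni-858719" cluster and context entries are missing. A hedged sketch of that repair, assuming k8s.io/client-go (the paths and entry names below mirror this report but the helper itself is illustrative, not minikube's own code):

	// Sketch: add a missing cluster/context entry to a kubeconfig file, in the
	// spirit of the "kubeconfig needs updating (will repair)" step above.
	package main

	import (
		"log"

		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/22054-389542/kubeconfig"

		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			log.Fatal(err)
		}

		// Ensure the cluster entry exists and points at the control plane.
		cfg.Clusters["newest-cni-858719"] = &clientcmdapi.Cluster{
			Server:               "https://192.168.76.2:8443",
			CertificateAuthority: "/home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt",
		}
		// Ensure the matching context exists.
		cfg.Contexts["newest-cni-858719"] = &clientcmdapi.Context{
			Cluster:  "newest-cni-858719",
			AuthInfo: "newest-cni-858719",
		}

		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			log.Fatal(err)
		}
	}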
	I1207 23:36:38.296126  677704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:36:38.296231  677704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:36:38.296364  677704 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-858719"
	I1207 23:36:38.296391  677704 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-858719"
	I1207 23:36:38.296385  677704 addons.go:70] Setting dashboard=true in profile "newest-cni-858719"
	W1207 23:36:38.296403  677704 addons.go:248] addon storage-provisioner should already be in state true
	I1207 23:36:38.296420  677704 addons.go:239] Setting addon dashboard=true in "newest-cni-858719"
	W1207 23:36:38.296437  677704 addons.go:248] addon dashboard should already be in state true
	I1207 23:36:38.296445  677704 host.go:66] Checking if "newest-cni-858719" exists ...
	I1207 23:36:38.296432  677704 addons.go:70] Setting default-storageclass=true in profile "newest-cni-858719"
	I1207 23:36:38.296468  677704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-858719"
	I1207 23:36:38.296475  677704 host.go:66] Checking if "newest-cni-858719" exists ...
	I1207 23:36:38.296480  677704 config.go:182] Loaded profile config "newest-cni-858719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:38.296903  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:38.296913  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:38.296916  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:38.304834  677704 out.go:179] * Verifying Kubernetes components...
	I1207 23:36:38.321121  677704 addons.go:239] Setting addon default-storageclass=true in "newest-cni-858719"
	W1207 23:36:38.321142  677704 addons.go:248] addon default-storageclass should already be in state true
	I1207 23:36:38.321167  677704 host.go:66] Checking if "newest-cni-858719" exists ...
	I1207 23:36:38.321502  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:38.331788  677704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1207 23:36:38.331860  677704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:38.331806  677704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:36:38.339675  677704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:36:38.339781  677704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:36:38.339832  677704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1207 23:36:35.058792  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	W1207 23:36:37.059360  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	W1207 23:36:39.558825  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	I1207 23:36:38.339851  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:38.340452  677704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:36:38.340471  677704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:36:38.340521  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:38.362068  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:38.362162  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:38.362941  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1207 23:36:38.362965  677704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1207 23:36:38.363025  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:38.392574  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:38.462983  677704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:36:38.484595  677704 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:36:38.484756  677704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:36:38.486717  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:36:38.491481  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:36:38.510448  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1207 23:36:38.510515  677704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1207 23:36:38.536570  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1207 23:36:38.536602  677704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1207 23:36:38.566084  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1207 23:36:38.566115  677704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1207 23:36:38.600942  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1207 23:36:38.600972  677704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1207 23:36:38.609165  677704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1207 23:36:38.609215  677704 retry.go:31] will retry after 211.51386ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1207 23:36:38.609284  677704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1207 23:36:38.609300  677704 retry.go:31] will retry after 303.789465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
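	Both addon applies fail on the first attempt because the apiserver is not yet answering on localhost:8443, so they are re-queued with a short delay (the retry.go lines above). A minimal sketch of that retry-until-deadline pattern, independent of minikube's own retry helper:

	// Sketch: retry an operation with a growing delay until it succeeds or a
	// deadline passes, the same shape as the "will retry after ..." lines above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func retryUntil(deadline time.Duration, op func() error) error {
		delay := 200 * time.Millisecond
		start := time.Now()
		for {
			err := op()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("giving up after %s: %w", deadline, err)
			}
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // simple exponential backoff
		}
	}

	func main() {
		attempts := 0
		err := retryUntil(5*time.Second, func() error {
			attempts++
			if attempts < 3 {
				return errors.New("connection refused") // stand-in for the kubectl apply failure
			}
			return nil
		})
		fmt.Println("attempts:", attempts, "err:", err)
	}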
	I1207 23:36:38.623815  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1207 23:36:38.624079  677704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1207 23:36:38.653443  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1207 23:36:38.653478  677704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1207 23:36:38.678913  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1207 23:36:38.678945  677704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1207 23:36:38.701578  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1207 23:36:38.701607  677704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1207 23:36:38.720445  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:36:38.720502  677704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1207 23:36:38.743195  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:36:38.821620  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:36:38.913583  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:36:38.985710  677704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:36:41.415564  677704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.593900667s)
	I1207 23:36:41.416832  677704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.673583567s)
	I1207 23:36:41.418467  677704 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-858719 addons enable metrics-server
	
	I1207 23:36:41.532720  677704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.619029892s)
	I1207 23:36:41.533073  677704 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.547330372s)
	I1207 23:36:41.533100  677704 api_server.go:72] duration metric: took 3.236908876s to wait for apiserver process to appear ...
	I1207 23:36:41.533107  677704 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:36:41.533129  677704 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:36:41.534688  677704 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner
	I1207 23:36:41.535780  677704 addons.go:530] duration metric: took 3.239558186s for enable addons: enabled=[dashboard default-storageclass storage-provisioner]
	I1207 23:36:41.541555  677704 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:36:41.541584  677704 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:36:42.033193  677704 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:36:42.038840  677704 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1207 23:36:42.040044  677704 api_server.go:141] control plane version: v1.35.0-beta.0
	I1207 23:36:42.040086  677704 api_server.go:131] duration metric: took 506.968227ms to wait for apiserver health ...
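	The healthz wait above first sees a 500 while the rbac/bootstrap-roles and system-priority-classes post-start hooks finish, then a 200 about half a second later. A stdlib sketch of that poll loop (address is taken from this report; skipping TLS verification here is an illustrative shortcut instead of loading the cluster CA):

	// Sketch: poll https://<ip>:8443/healthz until it returns 200 or a timeout
	// expires, mirroring the api_server.go wait above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Illustrative only: skip verification instead of trusting the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Println("healthz returned", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitHealthy("https://192.168.76.2:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}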
	I1207 23:36:42.040100  677704 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:36:42.044016  677704 system_pods.go:59] 8 kube-system pods found
	I1207 23:36:42.044061  677704 system_pods.go:61] "coredns-7d764666f9-dp6qz" [1403dc21-d613-4225-bf80-faf8d23e774c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1207 23:36:42.044076  677704 system_pods.go:61] "etcd-newest-cni-858719" [58c61faa-719b-477c-8216-d9aaa8554cec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:36:42.044091  677704 system_pods.go:61] "kindnet-5zzk9" [b8e05261-d743-488e-9543-b60973ff09b4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 23:36:42.044103  677704 system_pods.go:61] "kube-apiserver-newest-cni-858719" [343d3191-d091-4436-a131-68718cb68508] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:36:42.044116  677704 system_pods.go:61] "kube-controller-manager-newest-cni-858719" [c2876dc8-1228-4980-bd43-1d58fcd760f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:36:42.044131  677704 system_pods.go:61] "kube-proxy-p8v8n" [494a11f1-086c-43f3-92e7-4b59d073c5f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 23:36:42.044143  677704 system_pods.go:61] "kube-scheduler-newest-cni-858719" [28d72586-76c3-4f37-b20e-0c7de9fe90ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:36:42.044153  677704 system_pods.go:61] "storage-provisioner" [a39abdef-8c48-494a-9bb1-645330622d99] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1207 23:36:42.044176  677704 system_pods.go:74] duration metric: took 4.066756ms to wait for pod list to return data ...
	I1207 23:36:42.044190  677704 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:36:42.047787  677704 default_sa.go:45] found service account: "default"
	I1207 23:36:42.047814  677704 default_sa.go:55] duration metric: took 3.616282ms for default service account to be created ...
	I1207 23:36:42.047828  677704 kubeadm.go:587] duration metric: took 3.751636263s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1207 23:36:42.047853  677704 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:36:42.051921  677704 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:36:42.051998  677704 node_conditions.go:123] node cpu capacity is 8
	I1207 23:36:42.052034  677704 node_conditions.go:105] duration metric: took 4.174035ms to run NodePressure ...
	I1207 23:36:42.052060  677704 start.go:242] waiting for startup goroutines ...
	I1207 23:36:42.052081  677704 start.go:247] waiting for cluster config update ...
	I1207 23:36:42.052105  677704 start.go:256] writing updated cluster config ...
	I1207 23:36:42.052449  677704 ssh_runner.go:195] Run: rm -f paused
	I1207 23:36:42.126816  677704 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1207 23:36:42.128432  677704 out.go:179] * Done! kubectl is now configured to use "newest-cni-858719" cluster and "default" namespace by default
	W1207 23:36:41.560993  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	W1207 23:36:44.058276  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	I1207 23:36:46.425201  673565 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 23:36:46.425289  673565 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:36:46.425503  673565 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:36:46.425591  673565 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:36:46.425648  673565 kubeadm.go:319] OS: Linux
	I1207 23:36:46.425726  673565 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:36:46.425779  673565 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:36:46.425823  673565 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:36:46.425881  673565 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:36:46.425943  673565 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:36:46.426027  673565 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:36:46.426082  673565 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:36:46.426157  673565 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:36:46.426289  673565 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:36:46.426472  673565 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:36:46.426603  673565 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:36:46.426688  673565 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 23:36:46.428853  673565 out.go:252]   - Generating certificates and keys ...
	I1207 23:36:46.428934  673565 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:36:46.429022  673565 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:36:46.429144  673565 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:36:46.429238  673565 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 23:36:46.429378  673565 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 23:36:46.429443  673565 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 23:36:46.429494  673565 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 23:36:46.429599  673565 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-600852 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1207 23:36:46.429689  673565 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 23:36:46.429804  673565 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-600852 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1207 23:36:46.429865  673565 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 23:36:46.429965  673565 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 23:36:46.430015  673565 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 23:36:46.430084  673565 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:36:46.430149  673565 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:36:46.430227  673565 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:36:46.430290  673565 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:36:46.430398  673565 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:36:46.430482  673565 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:36:46.430582  673565 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:36:46.430650  673565 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 23:36:46.431853  673565 out.go:252]   - Booting up control plane ...
	I1207 23:36:46.431944  673565 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:36:46.432044  673565 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:36:46.432121  673565 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:36:46.432245  673565 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:36:46.432377  673565 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:36:46.432543  673565 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:36:46.432695  673565 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:36:46.432767  673565 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:36:46.432970  673565 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 23:36:46.433126  673565 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 23:36:46.433207  673565 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001238699s
	I1207 23:36:46.433351  673565 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 23:36:46.433477  673565 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1207 23:36:46.433618  673565 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 23:36:46.433729  673565 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 23:36:46.433846  673565 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.123454399s
	I1207 23:36:46.433973  673565 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.842224323s
	I1207 23:36:46.434079  673565 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502286975s
	I1207 23:36:46.434238  673565 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 23:36:46.434444  673565 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 23:36:46.434538  673565 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 23:36:46.434735  673565 kubeadm.go:319] [mark-control-plane] Marking the node auto-600852 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 23:36:46.434801  673565 kubeadm.go:319] [bootstrap-token] Using token: v0nhi0.hemeervra3k4j1st
	I1207 23:36:46.436898  673565 out.go:252]   - Configuring RBAC rules ...
	I1207 23:36:46.437073  673565 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 23:36:46.437196  673565 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 23:36:46.437434  673565 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 23:36:46.437579  673565 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 23:36:46.437733  673565 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 23:36:46.437876  673565 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 23:36:46.438037  673565 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 23:36:46.438123  673565 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 23:36:46.438194  673565 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 23:36:46.438203  673565 kubeadm.go:319] 
	I1207 23:36:46.438300  673565 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 23:36:46.438313  673565 kubeadm.go:319] 
	I1207 23:36:46.438451  673565 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 23:36:46.438461  673565 kubeadm.go:319] 
	I1207 23:36:46.438481  673565 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 23:36:46.438581  673565 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 23:36:46.438664  673565 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 23:36:46.438677  673565 kubeadm.go:319] 
	I1207 23:36:46.438750  673565 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 23:36:46.438760  673565 kubeadm.go:319] 
	I1207 23:36:46.438823  673565 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 23:36:46.438853  673565 kubeadm.go:319] 
	I1207 23:36:46.438931  673565 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 23:36:46.439053  673565 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 23:36:46.439162  673565 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 23:36:46.439172  673565 kubeadm.go:319] 
	I1207 23:36:46.439299  673565 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 23:36:46.439441  673565 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 23:36:46.439451  673565 kubeadm.go:319] 
	I1207 23:36:46.439567  673565 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token v0nhi0.hemeervra3k4j1st \
	I1207 23:36:46.439740  673565 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 \
	I1207 23:36:46.439773  673565 kubeadm.go:319] 	--control-plane 
	I1207 23:36:46.439782  673565 kubeadm.go:319] 
	I1207 23:36:46.439908  673565 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 23:36:46.439920  673565 kubeadm.go:319] 
	I1207 23:36:46.440060  673565 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token v0nhi0.hemeervra3k4j1st \
	I1207 23:36:46.440244  673565 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 
	I1207 23:36:46.440259  673565 cni.go:84] Creating CNI manager for ""
	I1207 23:36:46.440268  673565 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:46.441847  673565 out.go:179] * Configuring CNI (Container Networking Interface) ...
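	kindnet is recommended here because the profile combines the docker driver with the crio runtime, and the kubelet in the node dump further down stays NotReady until a CNI config shows up in /etc/cni/net.d. As a hedged illustration only, a generic conflist of the kind a CNI manager drops there might look like the one written below; the plugin choice (bridge + host-local) and the file name are illustrative defaults, not the exact file kindnet generates.

	// Sketch: write a generic CNI conflist to /etc/cni/net.d so the kubelet's
	// "no CNI configuration file" condition clears. Contents are illustrative.
	package main

	import (
		"log"
		"os"
	)

	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "example-pod-network",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "ranges": [[{"subnet": "10.42.0.0/24"}]],
	        "routes": [{"dst": "0.0.0.0/0"}]
	      }
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/10-example.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}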
	
	
	==> CRI-O <==
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.747198973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.750789386Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1b886b1e-3a37-4b18-b7ef-ee93ded349aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.751173122Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=46cf6b0b-51d1-418d-8d21-a1d52b3ff0d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.752844752Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.753614709Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.753691357Z" level=info msg="Ran pod sandbox f0cc146fbfa2a1f55885742a7303a3195bdbcee7f25ab7cee37c0298b36f7fb4 with infra container: kube-system/kube-proxy-p8v8n/POD" id=1b886b1e-3a37-4b18-b7ef-ee93ded349aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.754574862Z" level=info msg="Ran pod sandbox 8208f390c38e9dcdfcf1dfd3262eccb3135b1ec8cb4eb4f0e9b2b4f0efb64e68 with infra container: kube-system/kindnet-5zzk9/POD" id=46cf6b0b-51d1-418d-8d21-a1d52b3ff0d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.755222239Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f34def23-74f3-478a-9146-9fa3c544446d name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.756077161Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=548a62cd-64e5-4c3a-940a-6218ab8fa99e name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.7562978Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=b863339c-9998-49c8-85be-69c1c3a2dea5 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.757619514Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ff33236a-fab2-4fcb-9795-4f69bfde768f name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.757733732Z" level=info msg="Creating container: kube-system/kube-proxy-p8v8n/kube-proxy" id=f275e99d-05a9-4dde-b458-dade5b2c408f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.757887357Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.759524416Z" level=info msg="Creating container: kube-system/kindnet-5zzk9/kindnet-cni" id=1c64a00c-405e-41ff-b845-6d525f9f5642 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.759615609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.764618129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.765307363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.767144623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.769037112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.804715712Z" level=info msg="Created container cc1cd9bf7531e730eee0e48829fb2f2262509a9acb9a58a449d07c2908258bae: kube-system/kindnet-5zzk9/kindnet-cni" id=1c64a00c-405e-41ff-b845-6d525f9f5642 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.805879807Z" level=info msg="Starting container: cc1cd9bf7531e730eee0e48829fb2f2262509a9acb9a58a449d07c2908258bae" id=92081333-dcfa-413e-8df0-dc2b3200e29d name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.807915137Z" level=info msg="Created container b16beb4e4b195daeeefa06631cdab33892ab5de00e1eaa4f3d42a32591fc4c36: kube-system/kube-proxy-p8v8n/kube-proxy" id=f275e99d-05a9-4dde-b458-dade5b2c408f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.808423882Z" level=info msg="Started container" PID=1067 containerID=cc1cd9bf7531e730eee0e48829fb2f2262509a9acb9a58a449d07c2908258bae description=kube-system/kindnet-5zzk9/kindnet-cni id=92081333-dcfa-413e-8df0-dc2b3200e29d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8208f390c38e9dcdfcf1dfd3262eccb3135b1ec8cb4eb4f0e9b2b4f0efb64e68
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.809147584Z" level=info msg="Starting container: b16beb4e4b195daeeefa06631cdab33892ab5de00e1eaa4f3d42a32591fc4c36" id=32407dc6-09a7-4323-93a3-569f2a1eca9d name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:36:41 newest-cni-858719 crio[521]: time="2025-12-07T23:36:41.812949113Z" level=info msg="Started container" PID=1068 containerID=b16beb4e4b195daeeefa06631cdab33892ab5de00e1eaa4f3d42a32591fc4c36 description=kube-system/kube-proxy-p8v8n/kube-proxy id=32407dc6-09a7-4323-93a3-569f2a1eca9d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0cc146fbfa2a1f55885742a7303a3195bdbcee7f25ab7cee37c0298b36f7fb4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cc1cd9bf7531e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   8208f390c38e9       kindnet-5zzk9                               kube-system
	b16beb4e4b195       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   6 seconds ago       Running             kube-proxy                1                   f0cc146fbfa2a       kube-proxy-p8v8n                            kube-system
	1fde05929ea13       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   9 seconds ago       Running             kube-controller-manager   1                   89c40815e4472       kube-controller-manager-newest-cni-858719   kube-system
	09b2ae0a7c5b9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   9 seconds ago       Running             etcd                      1                   313a36b7c28f9       etcd-newest-cni-858719                      kube-system
	20259f47f9c60       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   9 seconds ago       Running             kube-scheduler            1                   66e6af5a01eca       kube-scheduler-newest-cni-858719            kube-system
	60889310640bb       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   9 seconds ago       Running             kube-apiserver            1                   97a28c758cf7d       kube-apiserver-newest-cni-858719            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-858719
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-858719
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=newest-cni-858719
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_36_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:36:10 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-858719
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:36:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:36:40 +0000   Sun, 07 Dec 2025 23:36:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:36:40 +0000   Sun, 07 Dec 2025 23:36:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:36:40 +0000   Sun, 07 Dec 2025 23:36:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 07 Dec 2025 23:36:40 +0000   Sun, 07 Dec 2025 23:36:08 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-858719
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                2fe19260-c79d-4da0-b8eb-1e49571b8323
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-858719                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-5zzk9                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-858719             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-858719    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-p8v8n                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-858719             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  32s   node-controller  Node newest-cni-858719 event: Registered Node newest-cni-858719 in Controller
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-858719 event: Registered Node newest-cni-858719 in Controller
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [09b2ae0a7c5b9e30441c564fc12ee45fca2591d70a3b0c4f829362d1f7b1c11c] <==
	{"level":"warn","ts":"2025-12-07T23:36:39.885101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.893280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.901964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.909898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.918210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.926951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.933804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.945516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.956317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.964116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.971294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.982312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.986484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:39.994580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.003959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.011043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.018731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.041796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.045822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.057905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.066450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.074513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:40.137833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54652","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-07T23:36:40.877544Z","caller":"traceutil/trace.go:172","msg":"trace[35235451] transaction","detail":"{read_only:false; number_of_response:1; response_revision:427; }","duration":"121.657656ms","start":"2025-12-07T23:36:40.755868Z","end":"2025-12-07T23:36:40.877525Z","steps":["trace[35235451] 'process raft request'  (duration: 121.043298ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-07T23:36:41.047639Z","caller":"traceutil/trace.go:172","msg":"trace[1342702817] transaction","detail":"{read_only:false; number_of_response:0; response_revision:433; }","duration":"121.561622ms","start":"2025-12-07T23:36:40.926036Z","end":"2025-12-07T23:36:41.047597Z","steps":["trace[1342702817] 'process raft request'  (duration: 96.242181ms)","trace[1342702817] 'compare'  (duration: 25.273928ms)"],"step_count":2}
	
	
	==> kernel <==
	 23:36:48 up  2:19,  0 user,  load average: 4.43, 2.86, 2.06
	Linux newest-cni-858719 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cc1cd9bf7531e730eee0e48829fb2f2262509a9acb9a58a449d07c2908258bae] <==
	I1207 23:36:42.048286       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:36:42.048568       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1207 23:36:42.048723       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:36:42.048748       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:36:42.048770       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:36:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:36:42.257782       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:36:42.348067       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:36:42.348126       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:36:42.348355       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:36:42.648217       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:36:42.648251       1 metrics.go:72] Registering metrics
	I1207 23:36:42.648828       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [60889310640bb67836703a1f3f74d931394169d4bb63a245566fc54bf5762844] <==
	I1207 23:36:40.754679       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:40.754918       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 23:36:40.754943       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:40.755103       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1207 23:36:40.755573       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1207 23:36:40.755654       1 aggregator.go:187] initial CRD sync complete...
	I1207 23:36:40.755686       1 autoregister_controller.go:144] Starting autoregister controller
	I1207 23:36:40.755712       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:36:40.755722       1 cache.go:39] Caches are synced for autoregister controller
	I1207 23:36:40.760055       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1207 23:36:40.762051       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:36:40.766075       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:40.801611       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1207 23:36:40.905818       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:36:41.200632       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:36:41.251471       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:36:41.285007       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:36:41.297758       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:36:41.313590       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:36:41.382159       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.125.6"}
	I1207 23:36:41.402410       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.227.102"}
	I1207 23:36:41.657235       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1207 23:36:44.418656       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:36:44.469747       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:36:44.517425       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1fde05929ea13b803231bae6fb303618dc3a2b54347fde44f9fc6cbc20d0c478] <==
	I1207 23:36:43.921921       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-858719"
	I1207 23:36:43.922553       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1207 23:36:43.922583       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.922840       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.922889       1 range_allocator.go:177] "Sending events to api server"
	I1207 23:36:43.922938       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1207 23:36:43.922944       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:36:43.922949       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.922989       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923019       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.921508       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923051       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923068       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923045       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923419       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923446       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923496       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923525       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.923702       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:43.925999       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:36:43.936401       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:44.021908       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:44.021930       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1207 23:36:44.021936       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1207 23:36:44.026273       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [b16beb4e4b195daeeefa06631cdab33892ab5de00e1eaa4f3d42a32591fc4c36] <==
	I1207 23:36:41.861916       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:36:41.930396       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:36:42.030955       1 shared_informer.go:377] "Caches are synced"
	I1207 23:36:42.031023       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1207 23:36:42.031129       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:36:42.060916       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:36:42.060984       1 server_linux.go:136] "Using iptables Proxier"
	I1207 23:36:42.069384       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:36:42.071021       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1207 23:36:42.071247       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:36:42.078442       1 config.go:200] "Starting service config controller"
	I1207 23:36:42.078481       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:36:42.078623       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:36:42.078656       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:36:42.078683       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:36:42.078688       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:36:42.079316       1 config.go:309] "Starting node config controller"
	I1207 23:36:42.079356       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:36:42.079364       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:36:42.178648       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:36:42.178805       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:36:42.178802       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [20259f47f9c60903d1615e570f4a362857f9df6b8c1ceeeb7dae4a4a6bddec57] <==
	I1207 23:36:39.100406       1 serving.go:386] Generated self-signed cert in-memory
	W1207 23:36:40.716866       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:36:40.716909       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:36:40.716921       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:36:40.716931       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:36:40.737202       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1207 23:36:40.737237       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:36:40.740575       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:36:40.740605       1 shared_informer.go:370] "Waiting for caches to sync"
	I1207 23:36:40.740785       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:36:40.741532       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:36:40.840941       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.050089     665 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-858719\" already exists" pod="kube-system/kube-scheduler-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.433704     665 apiserver.go:52] "Watching apiserver"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.443279     665 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.513463     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-858719" containerName="kube-controller-manager"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.514470     665 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.513472     665 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.513591     665 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.516256     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/494a11f1-086c-43f3-92e7-4b59d073c5f9-xtables-lock\") pod \"kube-proxy-p8v8n\" (UID: \"494a11f1-086c-43f3-92e7-4b59d073c5f9\") " pod="kube-system/kube-proxy-p8v8n"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.516341     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/494a11f1-086c-43f3-92e7-4b59d073c5f9-lib-modules\") pod \"kube-proxy-p8v8n\" (UID: \"494a11f1-086c-43f3-92e7-4b59d073c5f9\") " pod="kube-system/kube-proxy-p8v8n"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.516380     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8e05261-d743-488e-9543-b60973ff09b4-xtables-lock\") pod \"kindnet-5zzk9\" (UID: \"b8e05261-d743-488e-9543-b60973ff09b4\") " pod="kube-system/kindnet-5zzk9"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.516403     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b8e05261-d743-488e-9543-b60973ff09b4-cni-cfg\") pod \"kindnet-5zzk9\" (UID: \"b8e05261-d743-488e-9543-b60973ff09b4\") " pod="kube-system/kindnet-5zzk9"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: I1207 23:36:41.516428     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8e05261-d743-488e-9543-b60973ff09b4-lib-modules\") pod \"kindnet-5zzk9\" (UID: \"b8e05261-d743-488e-9543-b60973ff09b4\") " pod="kube-system/kindnet-5zzk9"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.551109     665 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-858719\" already exists" pod="kube-system/kube-apiserver-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.551647     665 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-858719\" already exists" pod="kube-system/kube-scheduler-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.551275     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-858719" containerName="kube-apiserver"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.551846     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-858719" containerName="kube-scheduler"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.552644     665 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-858719\" already exists" pod="kube-system/etcd-newest-cni-858719"
	Dec 07 23:36:41 newest-cni-858719 kubelet[665]: E1207 23:36:41.552820     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-858719" containerName="etcd"
	Dec 07 23:36:42 newest-cni-858719 kubelet[665]: E1207 23:36:42.520170     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-858719" containerName="etcd"
	Dec 07 23:36:42 newest-cni-858719 kubelet[665]: E1207 23:36:42.520236     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-858719" containerName="kube-apiserver"
	Dec 07 23:36:42 newest-cni-858719 kubelet[665]: E1207 23:36:42.520469     665 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-858719" containerName="kube-scheduler"
	Dec 07 23:36:43 newest-cni-858719 kubelet[665]: I1207 23:36:43.297292     665 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 07 23:36:43 newest-cni-858719 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 07 23:36:43 newest-cni-858719 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 07 23:36:43 newest-cni-858719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-858719 -n newest-cni-858719
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-858719 -n newest-cni-858719: exit status 2 (334.255005ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-858719 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-7d764666f9-dp6qz storage-provisioner dashboard-metrics-scraper-867fb5f87b-4z8k9 kubernetes-dashboard-b84665fb8-fsbs4
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-858719 describe pod coredns-7d764666f9-dp6qz storage-provisioner dashboard-metrics-scraper-867fb5f87b-4z8k9 kubernetes-dashboard-b84665fb8-fsbs4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-858719 describe pod coredns-7d764666f9-dp6qz storage-provisioner dashboard-metrics-scraper-867fb5f87b-4z8k9 kubernetes-dashboard-b84665fb8-fsbs4: exit status 1 (64.489038ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-dp6qz" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-4z8k9" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-fsbs4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-858719 describe pod coredns-7d764666f9-dp6qz storage-provisioner dashboard-metrics-scraper-867fb5f87b-4z8k9 kubernetes-dashboard-b84665fb8-fsbs4: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-312944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-312944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (257.303014ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:36:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
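The exit status 11 above comes from minikube's paused-state check: per the stderr, it shells out to sudo runc list -f json on the node and fails because /run/runc does not exist. A rough way to re-run that same check by hand against the profile from this run (a sketch only, not part of the test):

	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-312944
	# inside the node: the same command the check runs; expect the same "open /run/runc" error
	sudo runc list -f json
	# crictl talks to cri-o directly, so it can still list the containers
	sudo crictl ps
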
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-312944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-312944 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-312944 describe deploy/metrics-server -n kube-system: exit status 1 (61.98141ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-312944 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-312944
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-312944:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61",
	        "Created": "2025-12-07T23:35:53.17207692Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 664383,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:35:53.218222031Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61/hostname",
	        "HostsPath": "/var/lib/docker/containers/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61/hosts",
	        "LogPath": "/var/lib/docker/containers/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61-json.log",
	        "Name": "/default-k8s-diff-port-312944",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-312944:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-312944",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61",
	                "LowerDir": "/var/lib/docker/overlay2/0118ae1fd177a027d3c4130ba6cb419228d15d23a753279249b22be530579070-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0118ae1fd177a027d3c4130ba6cb419228d15d23a753279249b22be530579070/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0118ae1fd177a027d3c4130ba6cb419228d15d23a753279249b22be530579070/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0118ae1fd177a027d3c4130ba6cb419228d15d23a753279249b22be530579070/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-312944",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-312944/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-312944",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-312944",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-312944",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d5150bbcce69e4a49221fa2e4100f5e2e160edf01bdd4ce6607d2e92297e4d39",
	            "SandboxKey": "/var/run/docker/netns/d5150bbcce69",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-312944": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "217dc275cbc6467e058b35e68e0b1d3b5b2cb07cc2e90f33cf455ec5c147cec4",
	                    "EndpointID": "04bcb66831580725cd37a8014fdc118f92869b0c3c64bbcb0d7b52fd416c6466",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ba:4e:9d:69:07:fb",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-312944",
	                        "df4662170d3c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
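The inspect output above includes the host port mapping for the cluster's API server (8444/tcp published on 127.0.0.1:33456). When only that mapping is of interest, a Go-template query against the same container is a lighter check than reading the full JSON (a sketch; the container name is simply this run's profile):

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort }}' default-k8s-diff-port-312944
	# prints 33456 for this run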
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-312944 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-312944 logs -n 25: (1.200762788s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-320477 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                    │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p old-k8s-version-320477                                                                                                                                                                                                                            │ old-k8s-version-320477       │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ delete  │ -p disable-driver-mounts-837628                                                                                                                                                                                                                      │ disable-driver-mounts-837628 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p default-k8s-diff-port-312944 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:36 UTC │
	│ delete  │ -p kubernetes-upgrade-703538                                                                                                                                                                                                                         │ kubernetes-upgrade-703538    │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:35 UTC │
	│ start   │ -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │ 07 Dec 25 23:36 UTC │
	│ addons  │ enable metrics-server -p embed-certs-654118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:35 UTC │                     │
	│ stop    │ -p embed-certs-654118 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ image   │ no-preload-313006 image list --format=json                                                                                                                                                                                                           │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ pause   │ -p no-preload-313006 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ delete  │ -p no-preload-313006                                                                                                                                                                                                                                 │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-858719 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-654118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ delete  │ -p no-preload-313006                                                                                                                                                                                                                                 │ no-preload-313006            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ start   │ -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ start   │ -p auto-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                              │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ stop    │ -p newest-cni-858719 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-858719 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ start   │ -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ image   │ newest-cni-858719 image list --format=json                                                                                                                                                                                                           │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │ 07 Dec 25 23:36 UTC │
	│ pause   │ -p newest-cni-858719 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-858719            │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-312944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:36:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:36:30.199382  677704 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:36:30.199678  677704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:36:30.199690  677704 out.go:374] Setting ErrFile to fd 2...
	I1207 23:36:30.199696  677704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:36:30.199985  677704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:36:30.200696  677704 out.go:368] Setting JSON to false
	I1207 23:36:30.202255  677704 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8334,"bootTime":1765142256,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:36:30.202356  677704 start.go:143] virtualization: kvm guest
	I1207 23:36:30.204485  677704 out.go:179] * [newest-cni-858719] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:36:30.206079  677704 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:36:30.206102  677704 notify.go:221] Checking for updates...
	I1207 23:36:30.208549  677704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:36:30.209775  677704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:36:30.214561  677704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:36:30.215983  677704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:36:30.217521  677704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:36:30.219339  677704 config.go:182] Loaded profile config "newest-cni-858719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:30.220075  677704 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:36:30.244737  677704 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:36:30.244935  677704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:36:30.311650  677704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-07 23:36:30.299453318 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:36:30.311817  677704 docker.go:319] overlay module found
	I1207 23:36:30.315570  677704 out.go:179] * Using the docker driver based on existing profile
	I1207 23:36:30.317497  677704 start.go:309] selected driver: docker
	I1207 23:36:30.317524  677704 start.go:927] validating driver "docker" against &{Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:30.317669  677704 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:36:30.318487  677704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:36:30.399830  677704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-07 23:36:30.383304383 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:36:30.401873  677704 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1207 23:36:30.401972  677704 cni.go:84] Creating CNI manager for ""
	I1207 23:36:30.402072  677704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:30.402132  677704 start.go:353] cluster config:
	{Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:30.404155  677704 out.go:179] * Starting "newest-cni-858719" primary control-plane node in "newest-cni-858719" cluster
	I1207 23:36:30.405367  677704 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:36:30.406789  677704 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:36:30.408087  677704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:36:30.408131  677704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1207 23:36:30.408146  677704 cache.go:65] Caching tarball of preloaded images
	I1207 23:36:30.408265  677704 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:36:30.408277  677704 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1207 23:36:30.408426  677704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json ...
	I1207 23:36:30.408463  677704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:36:30.446133  677704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:36:30.446161  677704 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:36:30.446179  677704 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:36:30.446224  677704 start.go:360] acquireMachinesLock for newest-cni-858719: {Name:mk3f9783a06cd72eff911e9615fc59e854b06695 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:36:30.446291  677704 start.go:364] duration metric: took 37.32µs to acquireMachinesLock for "newest-cni-858719"
	I1207 23:36:30.446316  677704 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:36:30.446340  677704 fix.go:54] fixHost starting: 
	I1207 23:36:30.446637  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:30.475469  677704 fix.go:112] recreateIfNeeded on newest-cni-858719: state=Stopped err=<nil>
	W1207 23:36:30.475505  677704 fix.go:138] unexpected machine state, will restart: <nil>
	I1207 23:36:30.014058  673565 start.go:296] duration metric: took 177.314443ms for postStartSetup
	I1207 23:36:30.014519  673565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-600852
	I1207 23:36:30.038610  673565 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/config.json ...
	I1207 23:36:30.038964  673565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:36:30.039016  673565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-600852
	I1207 23:36:30.065777  673565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/auto-600852/id_rsa Username:docker}
	I1207 23:36:30.162043  673565 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:36:30.167397  673565 start.go:128] duration metric: took 9.888881461s to createHost
	I1207 23:36:30.167425  673565 start.go:83] releasing machines lock for "auto-600852", held for 9.889029296s
	I1207 23:36:30.167504  673565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-600852
	I1207 23:36:30.187852  673565 ssh_runner.go:195] Run: cat /version.json
	I1207 23:36:30.187898  673565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-600852
	I1207 23:36:30.187900  673565 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:36:30.187983  673565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-600852
	I1207 23:36:30.209183  673565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/auto-600852/id_rsa Username:docker}
	I1207 23:36:30.209573  673565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/auto-600852/id_rsa Username:docker}
	I1207 23:36:30.375401  673565 ssh_runner.go:195] Run: systemctl --version
	I1207 23:36:30.387970  673565 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:36:30.451937  673565 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:36:30.463288  673565 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:36:30.463383  673565 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:36:30.500519  673565 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 23:36:30.500548  673565 start.go:496] detecting cgroup driver to use...
	I1207 23:36:30.500586  673565 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:36:30.500644  673565 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:36:30.523553  673565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:36:30.542090  673565 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:36:30.542193  673565 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:36:30.562685  673565 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:36:30.590093  673565 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:36:30.714368  673565 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:36:30.834479  673565 docker.go:234] disabling docker service ...
	I1207 23:36:30.834549  673565 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:36:30.869941  673565 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:36:30.894568  673565 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:36:31.002667  673565 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:36:31.119924  673565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:36:31.142153  673565 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:36:31.163106  673565 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:36:31.163177  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.174874  673565 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:36:31.174957  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.186962  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.197787  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.208567  673565 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:36:31.217977  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.228985  673565 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.243864  673565 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:31.253438  673565 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:36:31.261577  673565 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:36:31.269437  673565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:31.349977  673565 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:36:31.501537  673565 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:36:31.501610  673565 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:36:31.507081  673565 start.go:564] Will wait 60s for crictl version
	I1207 23:36:31.507153  673565 ssh_runner.go:195] Run: which crictl
	I1207 23:36:31.511425  673565 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:36:31.539351  673565 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:36:31.539441  673565 ssh_runner.go:195] Run: crio --version
	I1207 23:36:31.569558  673565 ssh_runner.go:195] Run: crio --version
	I1207 23:36:31.600664  673565 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
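
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.10.1, cgroup_manager "systemd", conmon_cgroup "pod", the net.ipv4.ip_unprivileged_port_start=0 sysctl) and point crictl at the CRI-O socket before restarting the runtime. A minimal sketch of checking those results by hand on the node, assuming the same file paths shown in the log:

    # Confirm the drop-in now carries the values written by the sed edits above
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # crictl should be talking to the socket configured in /etc/crictl.yaml
    cat /etc/crictl.yaml
    sudo crictl version   # expect RuntimeName: cri-o, RuntimeVersion: 1.34.3
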
	W1207 23:36:30.349629  663227 node_ready.go:57] node "default-k8s-diff-port-312944" has "Ready":"False" status (will retry)
	I1207 23:36:30.849610  663227 node_ready.go:49] node "default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:30.849651  663227 node_ready.go:38] duration metric: took 11.006384498s for node "default-k8s-diff-port-312944" to be "Ready" ...
	I1207 23:36:30.849671  663227 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:36:30.849731  663227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:36:30.872873  663227 api_server.go:72] duration metric: took 11.455368709s to wait for apiserver process to appear ...
	I1207 23:36:30.873121  663227 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:36:30.873147  663227 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1207 23:36:30.882134  663227 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1207 23:36:30.883434  663227 api_server.go:141] control plane version: v1.34.2
	I1207 23:36:30.883472  663227 api_server.go:131] duration metric: took 10.341551ms to wait for apiserver health ...
	I1207 23:36:30.883493  663227 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:36:30.888989  663227 system_pods.go:59] 8 kube-system pods found
	I1207 23:36:30.889030  663227 system_pods.go:61] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:30.889038  663227 system_pods.go:61] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:30.889046  663227 system_pods.go:61] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:30.889052  663227 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:30.889058  663227 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:30.889063  663227 system_pods.go:61] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:30.889069  663227 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:30.889076  663227 system_pods.go:61] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:30.889086  663227 system_pods.go:74] duration metric: took 5.585227ms to wait for pod list to return data ...
	I1207 23:36:30.889097  663227 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:36:30.892279  663227 default_sa.go:45] found service account: "default"
	I1207 23:36:30.892306  663227 default_sa.go:55] duration metric: took 3.201148ms for default service account to be created ...
	I1207 23:36:30.892318  663227 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:36:30.896636  663227 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:30.896687  663227 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:30.896696  663227 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:30.896704  663227 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:30.896710  663227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:30.896735  663227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:30.896745  663227 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:30.896751  663227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:30.896758  663227 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:30.896786  663227 retry.go:31] will retry after 222.292044ms: missing components: kube-dns
	I1207 23:36:31.126979  663227 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:31.127080  663227 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:31.127100  663227 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:31.127109  663227 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:31.127120  663227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:31.127129  663227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:31.127135  663227 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:31.127139  663227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:31.127147  663227 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:31.127169  663227 retry.go:31] will retry after 307.291664ms: missing components: kube-dns
	I1207 23:36:31.440222  663227 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:31.440265  663227 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:31.440273  663227 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:31.440283  663227 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:31.440290  663227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:31.440295  663227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:31.440302  663227 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:31.440307  663227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:31.440314  663227 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:31.440354  663227 retry.go:31] will retry after 426.001876ms: missing components: kube-dns
	I1207 23:36:31.871913  663227 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:31.871946  663227 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Running
	I1207 23:36:31.871953  663227 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running
	I1207 23:36:31.871957  663227 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:36:31.871961  663227 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running
	I1207 23:36:31.871968  663227 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running
	I1207 23:36:31.871973  663227 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:36:31.871978  663227 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running
	I1207 23:36:31.871982  663227 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Running
	I1207 23:36:31.871993  663227 system_pods.go:126] duration metric: took 979.653637ms to wait for k8s-apps to be running ...
	I1207 23:36:31.872008  663227 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:36:31.872059  663227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:31.886268  663227 system_svc.go:56] duration metric: took 14.248421ms WaitForService to wait for kubelet
	I1207 23:36:31.886301  663227 kubeadm.go:587] duration metric: took 12.468803502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:36:31.886319  663227 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:36:31.889484  663227 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:36:31.889518  663227 node_conditions.go:123] node cpu capacity is 8
	I1207 23:36:31.889536  663227 node_conditions.go:105] duration metric: took 3.211978ms to run NodePressure ...
	I1207 23:36:31.889549  663227 start.go:242] waiting for startup goroutines ...
	I1207 23:36:31.889557  663227 start.go:247] waiting for cluster config update ...
	I1207 23:36:31.889567  663227 start.go:256] writing updated cluster config ...
	I1207 23:36:31.889825  663227 ssh_runner.go:195] Run: rm -f paused
	I1207 23:36:31.893873  663227 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:36:31.900462  663227 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p4v2f" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.905254  663227 pod_ready.go:94] pod "coredns-66bc5c9577-p4v2f" is "Ready"
	I1207 23:36:31.905281  663227 pod_ready.go:86] duration metric: took 4.791855ms for pod "coredns-66bc5c9577-p4v2f" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.908065  663227 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.913118  663227 pod_ready.go:94] pod "etcd-default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:31.913140  663227 pod_ready.go:86] duration metric: took 5.030101ms for pod "etcd-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.914938  663227 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.918718  663227 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:31.918742  663227 pod_ready.go:86] duration metric: took 3.786001ms for pod "kube-apiserver-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:31.920411  663227 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:32.299220  663227 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:32.299254  663227 pod_ready.go:86] duration metric: took 378.816082ms for pod "kube-controller-manager-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:32.498428  663227 pod_ready.go:83] waiting for pod "kube-proxy-7stg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:32.898763  663227 pod_ready.go:94] pod "kube-proxy-7stg5" is "Ready"
	I1207 23:36:32.898796  663227 pod_ready.go:86] duration metric: took 400.341199ms for pod "kube-proxy-7stg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:33.099537  663227 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:33.499044  663227 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-312944" is "Ready"
	I1207 23:36:33.499080  663227 pod_ready.go:86] duration metric: took 399.514812ms for pod "kube-scheduler-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:36:33.499097  663227 pod_ready.go:40] duration metric: took 1.605186446s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:36:33.554736  663227 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 23:36:33.556778  663227 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-312944" cluster and "default" namespace by default
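
The readiness loop above polls the kube-system pod list until CoreDNS leaves Pending (the repeated "missing components: kube-dns" retries), then waits on each control-plane pod's Ready condition. A rough equivalent with plain kubectl against the context the run just configured, shown only as an illustration:

    kubectl --context default-k8s-diff-port-312944 -n kube-system get pods
    # Block until CoreDNS reports Ready, mirroring the retry loop in the log
    kubectl --context default-k8s-diff-port-312944 -n kube-system \
        wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
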
	I1207 23:36:30.017812  673247 addons.go:530] duration metric: took 2.349839549s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1207 23:36:30.500546  673247 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1207 23:36:30.506524  673247 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:36:30.506554  673247 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:36:30.999819  673247 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1207 23:36:31.005585  673247 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1207 23:36:31.006742  673247 api_server.go:141] control plane version: v1.34.2
	I1207 23:36:31.006775  673247 api_server.go:131] duration metric: took 1.007113458s to wait for apiserver health ...
	I1207 23:36:31.006788  673247 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:36:31.011558  673247 system_pods.go:59] 8 kube-system pods found
	I1207 23:36:31.011611  673247 system_pods.go:61] "coredns-66bc5c9577-wvgqf" [80c1683b-a66c-4dd4-8d91-0e5cc2bd5e18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:31.011624  673247 system_pods.go:61] "etcd-embed-certs-654118" [b79ec937-fed7-4df6-9a57-24d6513402e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:36:31.011635  673247 system_pods.go:61] "kindnet-68q87" [7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 23:36:31.011645  673247 system_pods.go:61] "kube-apiserver-embed-certs-654118" [f6fab7ae-3dd9-48d2-8b83-9f72e33bbee1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:36:31.011655  673247 system_pods.go:61] "kube-controller-manager-embed-certs-654118" [9748b389-d642-4475-bc81-39199511f4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:36:31.011664  673247 system_pods.go:61] "kube-proxy-l75b2" [2f061a54-3641-473d-9c6a-77e51062e955] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 23:36:31.011671  673247 system_pods.go:61] "kube-scheduler-embed-certs-654118" [eb585812-9353-43b0-a610-34f3fcb6d32f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:36:31.011678  673247 system_pods.go:61] "storage-provisioner" [34685d0c-67b3-4683-b817-772fa2ef1c77] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:36:31.011701  673247 system_pods.go:74] duration metric: took 4.903872ms to wait for pod list to return data ...
	I1207 23:36:31.011712  673247 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:36:31.014761  673247 default_sa.go:45] found service account: "default"
	I1207 23:36:31.014791  673247 default_sa.go:55] duration metric: took 3.070892ms for default service account to be created ...
	I1207 23:36:31.014804  673247 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:36:31.018030  673247 system_pods.go:86] 8 kube-system pods found
	I1207 23:36:31.018077  673247 system_pods.go:89] "coredns-66bc5c9577-wvgqf" [80c1683b-a66c-4dd4-8d91-0e5cc2bd5e18] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:36:31.018089  673247 system_pods.go:89] "etcd-embed-certs-654118" [b79ec937-fed7-4df6-9a57-24d6513402e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:36:31.018098  673247 system_pods.go:89] "kindnet-68q87" [7fc0d1b0-080b-4e1c-b7b4-cd23aa94620a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 23:36:31.018106  673247 system_pods.go:89] "kube-apiserver-embed-certs-654118" [f6fab7ae-3dd9-48d2-8b83-9f72e33bbee1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:36:31.018121  673247 system_pods.go:89] "kube-controller-manager-embed-certs-654118" [9748b389-d642-4475-bc81-39199511f4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:36:31.018134  673247 system_pods.go:89] "kube-proxy-l75b2" [2f061a54-3641-473d-9c6a-77e51062e955] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 23:36:31.018142  673247 system_pods.go:89] "kube-scheduler-embed-certs-654118" [eb585812-9353-43b0-a610-34f3fcb6d32f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:36:31.018148  673247 system_pods.go:89] "storage-provisioner" [34685d0c-67b3-4683-b817-772fa2ef1c77] Running
	I1207 23:36:31.018164  673247 system_pods.go:126] duration metric: took 3.352378ms to wait for k8s-apps to be running ...
	I1207 23:36:31.018176  673247 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:36:31.018232  673247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:36:31.034999  673247 system_svc.go:56] duration metric: took 16.811304ms WaitForService to wait for kubelet
	I1207 23:36:31.035038  673247 kubeadm.go:587] duration metric: took 3.36708951s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:36:31.035063  673247 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:36:31.037964  673247 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:36:31.037997  673247 node_conditions.go:123] node cpu capacity is 8
	I1207 23:36:31.038017  673247 node_conditions.go:105] duration metric: took 2.947717ms to run NodePressure ...
	I1207 23:36:31.038038  673247 start.go:242] waiting for startup goroutines ...
	I1207 23:36:31.038047  673247 start.go:247] waiting for cluster config update ...
	I1207 23:36:31.038060  673247 start.go:256] writing updated cluster config ...
	I1207 23:36:31.038388  673247 ssh_runner.go:195] Run: rm -f paused
	I1207 23:36:31.045933  673247 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:36:31.051360  673247 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wvgqf" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:36:33.056839  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
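
The 500 responses above are expected during startup: every healthz check passes except the poststarthook/rbac/bootstrap-roles hook, which has not yet finished seeding the default RBAC roles, and the next probe returns 200. The same per-check breakdown can be requested directly from the endpoint the log polls, assuming anonymous access to /healthz is enabled (the Kubernetes default) and using -k because the minikube CA is not in the host trust store:

    # ?verbose lists every healthz check individually, as in the log output above
    curl -k https://192.168.103.2:8443/healthz?verbose
    curl -k https://192.168.103.2:8443/healthz      # plain probe: prints "ok" once all hooks pass
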
	I1207 23:36:31.601878  673565 cli_runner.go:164] Run: docker network inspect auto-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:36:31.621720  673565 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1207 23:36:31.626504  673565 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:31.638820  673565 kubeadm.go:884] updating cluster {Name:auto-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:36:31.638979  673565 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:36:31.639045  673565 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:31.671512  673565 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:31.671537  673565 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:36:31.671584  673565 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:31.698600  673565 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:31.698621  673565 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:36:31.698629  673565 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1207 23:36:31.698758  673565 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-600852 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:auto-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
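
The block above is the 10-kubeadm.conf drop-in minikube renders for the kubelet unit, overriding ExecStart with the profile's node IP, hostname override and bootstrap kubeconfig; it is copied to the node a few steps below. Once it is in place and systemd has been reloaded, the merged unit can be inspected to confirm which ExecStart the kubelet will actually run with, for example:

    systemctl cat kubelet                            # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager   # the effective command line after the override
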
	I1207 23:36:31.698849  673565 ssh_runner.go:195] Run: crio config
	I1207 23:36:31.748038  673565 cni.go:84] Creating CNI manager for ""
	I1207 23:36:31.748064  673565 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:31.748082  673565 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:36:31.748110  673565 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-600852 NodeName:auto-600852 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:36:31.748274  673565 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-600852"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
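
The config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one manifest) is what later gets written to /var/tmp/minikube/kubeadm.yaml.new. Such a manifest can be sanity-checked offline before any init is attempted; a sketch, assuming a kubeadm binary recent enough to have the validate subcommand:

    # Check the manifest against the kubeadm API types without touching node state
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # Or walk through what kubeadm would do with it, without applying anything
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
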
	
	I1207 23:36:31.748395  673565 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:36:31.757145  673565 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:36:31.757219  673565 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:36:31.766099  673565 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1207 23:36:31.779629  673565 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:36:31.800018  673565 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1207 23:36:31.817264  673565 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:36:31.822473  673565 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:31.834622  673565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:31.928227  673565 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:36:31.958251  673565 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852 for IP: 192.168.85.2
	I1207 23:36:31.958272  673565 certs.go:195] generating shared ca certs ...
	I1207 23:36:31.958288  673565 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:31.958457  673565 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:36:31.958513  673565 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:36:31.958523  673565 certs.go:257] generating profile certs ...
	I1207 23:36:31.958577  673565 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.key
	I1207 23:36:31.958592  673565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.crt with IP's: []
	I1207 23:36:32.182791  673565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.crt ...
	I1207 23:36:32.182826  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.crt: {Name:mkcb703f0f9e4b0a56f30bafc152e39ee98c32af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.183061  673565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.key ...
	I1207 23:36:32.183086  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/client.key: {Name:mk33e4c8c1a1e58f23780f89a8c200357fe9af2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.183245  673565 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key.5c32f241
	I1207 23:36:32.183269  673565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt.5c32f241 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1207 23:36:32.472518  673565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt.5c32f241 ...
	I1207 23:36:32.472552  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt.5c32f241: {Name:mkd72f567c38cb3b6e2eeb019eb8803d7c9b9ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.472743  673565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key.5c32f241 ...
	I1207 23:36:32.472756  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key.5c32f241: {Name:mk6a31094374001ab612b14e9c18e5030a69691d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.472836  673565 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt.5c32f241 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt
	I1207 23:36:32.472933  673565 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key.5c32f241 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key
	I1207 23:36:32.472997  673565 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.key
	I1207 23:36:32.473022  673565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.crt with IP's: []
	I1207 23:36:32.610842  673565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.crt ...
	I1207 23:36:32.610871  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.crt: {Name:mkdfed3c317c9a9b5274d2282923661c521bedc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:32.611075  673565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.key ...
	I1207 23:36:32.611096  673565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.key: {Name:mk38fd78995b6a1d76b48fda10f3d7ef0f5e91f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
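	(Aside: the profile certificates above are issued on the host and signed by the shared minikube CA, with the service and node IPs listed as SANs. A minimal sketch of that pattern using Go's crypto/x509 follows; it generates a throwaway CA instead of loading ~/.minikube/ca.key, so it illustrates the mechanism rather than reproducing minikube's crypto.go.)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA standing in for minikubeCA; minikube loads its CA
	// from ~/.minikube/ca.{crt,key} instead of generating one here.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Leaf certificate with the same IP SANs the log shows for apiserver.crt.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
```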
	I1207 23:36:32.611376  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:36:32.611433  673565 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:36:32.611449  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:36:32.611509  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:36:32.611544  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:36:32.611577  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:36:32.611637  673565 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:32.612219  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:36:32.631785  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:36:32.651000  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:36:32.670569  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:36:32.690024  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1207 23:36:32.708926  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:36:32.727240  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:36:32.751398  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/auto-600852/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:36:32.776129  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:36:32.799218  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:36:32.818906  673565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:36:32.839578  673565 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:36:32.853944  673565 ssh_runner.go:195] Run: openssl version
	I1207 23:36:32.860417  673565 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:32.869087  673565 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:36:32.877433  673565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:32.881465  673565 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:32.881547  673565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:32.920658  673565 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:36:32.928919  673565 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:36:32.937680  673565 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:36:32.945804  673565 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:36:32.955606  673565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:36:32.959865  673565 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:36:32.959922  673565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:36:32.996040  673565 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:36:33.004381  673565 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
	I1207 23:36:33.012360  673565 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:36:33.020201  673565 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:36:33.028224  673565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:36:33.032626  673565 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:36:33.032716  673565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:36:33.069017  673565 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:33.078318  673565 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:33.086473  673565 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:36:33.090434  673565 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:36:33.090491  673565 kubeadm.go:401] StartCluster: {Name:auto-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:auto-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:33.090588  673565 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:36:33.090632  673565 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:36:33.118539  673565 cri.go:89] found id: ""
	I1207 23:36:33.118605  673565 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:36:33.127222  673565 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:36:33.135780  673565 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:36:33.135833  673565 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:36:33.144151  673565 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:36:33.144172  673565 kubeadm.go:158] found existing configuration files:
	
	I1207 23:36:33.144215  673565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:36:33.152854  673565 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:36:33.152928  673565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:36:33.160896  673565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:36:33.168822  673565 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:36:33.168877  673565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:36:33.176284  673565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:36:33.184383  673565 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:36:33.184442  673565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:36:33.193714  673565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:36:33.202016  673565 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:36:33.202077  673565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 23:36:33.210129  673565 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:36:33.271747  673565 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 23:36:33.334835  673565 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 23:36:30.477140  677704 out.go:252] * Restarting existing docker container for "newest-cni-858719" ...
	I1207 23:36:30.477215  677704 cli_runner.go:164] Run: docker start newest-cni-858719
	I1207 23:36:30.809394  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:30.836380  677704 kic.go:430] container "newest-cni-858719" state is running.
	I1207 23:36:30.836921  677704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:30.866477  677704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/config.json ...
	I1207 23:36:30.866809  677704 machine.go:94] provisionDockerMachine start ...
	I1207 23:36:30.866882  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:30.898514  677704 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:30.898872  677704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1207 23:36:30.898893  677704 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:36:30.899781  677704 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50554->127.0.0.1:33473: read: connection reset by peer
	I1207 23:36:34.032697  677704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-858719
	
	I1207 23:36:34.032735  677704 ubuntu.go:182] provisioning hostname "newest-cni-858719"
	I1207 23:36:34.032802  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.054768  677704 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:34.055076  677704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1207 23:36:34.055103  677704 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-858719 && echo "newest-cni-858719" | sudo tee /etc/hostname
	I1207 23:36:34.201076  677704 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-858719
	
	I1207 23:36:34.201188  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.220957  677704 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:34.221305  677704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1207 23:36:34.221350  677704 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-858719' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-858719/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-858719' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:36:34.354180  677704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:36:34.354212  677704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:36:34.354255  677704 ubuntu.go:190] setting up certificates
	I1207 23:36:34.354268  677704 provision.go:84] configureAuth start
	I1207 23:36:34.354381  677704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:34.372396  677704 provision.go:143] copyHostCerts
	I1207 23:36:34.372463  677704 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:36:34.372474  677704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:36:34.372543  677704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:36:34.372653  677704 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:36:34.372662  677704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:36:34.372691  677704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:36:34.372767  677704 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:36:34.372775  677704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:36:34.372800  677704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:36:34.372863  677704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.newest-cni-858719 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-858719]
	I1207 23:36:34.438526  677704 provision.go:177] copyRemoteCerts
	I1207 23:36:34.438610  677704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:36:34.438661  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.457056  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:34.550753  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:36:34.569684  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1207 23:36:34.587851  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 23:36:34.605253  677704 provision.go:87] duration metric: took 250.964673ms to configureAuth
	I1207 23:36:34.605281  677704 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:36:34.605478  677704 config.go:182] Loaded profile config "newest-cni-858719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:34.605592  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.623964  677704 main.go:143] libmachine: Using SSH client type: native
	I1207 23:36:34.624277  677704 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1207 23:36:34.624303  677704 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:36:34.919543  677704 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:36:34.919573  677704 machine.go:97] duration metric: took 4.052749993s to provisionDockerMachine
	I1207 23:36:34.919588  677704 start.go:293] postStartSetup for "newest-cni-858719" (driver="docker")
	I1207 23:36:34.919604  677704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:36:34.919670  677704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:36:34.919713  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:34.940317  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:35.042131  677704 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:36:35.047382  677704 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:36:35.047431  677704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:36:35.047446  677704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:36:35.047504  677704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:36:35.047605  677704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:36:35.047744  677704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:36:35.059463  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:35.084378  677704 start.go:296] duration metric: took 164.724573ms for postStartSetup
	I1207 23:36:35.084483  677704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:36:35.084536  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:35.108317  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:35.212214  677704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:36:35.219280  677704 fix.go:56] duration metric: took 4.772929293s for fixHost
	I1207 23:36:35.219313  677704 start.go:83] releasing machines lock for "newest-cni-858719", held for 4.773005701s
	I1207 23:36:35.219452  677704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-858719
	I1207 23:36:35.245630  677704 ssh_runner.go:195] Run: cat /version.json
	I1207 23:36:35.245689  677704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:36:35.245694  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:35.245779  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:35.270514  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:35.270842  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:35.457960  677704 ssh_runner.go:195] Run: systemctl --version
	I1207 23:36:35.466947  677704 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:36:35.513529  677704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:36:35.519931  677704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:36:35.520007  677704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:36:35.531091  677704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:36:35.531122  677704 start.go:496] detecting cgroup driver to use...
	I1207 23:36:35.531158  677704 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:36:35.531220  677704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:36:35.552715  677704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:36:35.570570  677704 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:36:35.570644  677704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:36:35.591911  677704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:36:35.609216  677704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:36:35.730291  677704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:36:35.849860  677704 docker.go:234] disabling docker service ...
	I1207 23:36:35.849939  677704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:36:35.870164  677704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:36:35.887316  677704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:36:36.010320  677704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:36:36.134166  677704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:36:36.151763  677704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:36:36.171658  677704 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:36:36.171724  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.185507  677704 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:36:36.185577  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.199807  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.212561  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.224857  677704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:36:36.236376  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.248851  677704 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.260134  677704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:36:36.271388  677704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:36:36.282450  677704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:36:36.292401  677704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:36.402590  677704 ssh_runner.go:195] Run: sudo systemctl restart crio
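	(Aside: the sed commands above rewrite individual settings in /etc/crio/crio.conf.d/02-crio.conf, such as the pause image, cgroup manager, conmon cgroup and default sysctls, before crio is restarted. The sketch below shows the same single-line rewrite idea with Go's regexp package on an in-memory copy of the file; it is illustrative only, not the code minikube actually runs.)

```go
package main

import (
	"fmt"
	"regexp"
)

// setPauseImage replaces the pause_image line in a CRI-O drop-in config,
// analogous to the sed invocation shown in the log above.
func setPauseImage(conf, image string) string {
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
}

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
`
	fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.10.1"))
}
```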
	I1207 23:36:36.781588  677704 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:36:36.781654  677704 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:36:36.787090  677704 start.go:564] Will wait 60s for crictl version
	I1207 23:36:36.787149  677704 ssh_runner.go:195] Run: which crictl
	I1207 23:36:36.792213  677704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:36:36.824404  677704 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:36:36.824506  677704 ssh_runner.go:195] Run: crio --version
	I1207 23:36:36.862950  677704 ssh_runner.go:195] Run: crio --version
	I1207 23:36:36.905770  677704 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1207 23:36:36.907106  677704 cli_runner.go:164] Run: docker network inspect newest-cni-858719 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:36:36.931941  677704 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1207 23:36:36.937364  677704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:36.953376  677704 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1207 23:36:36.954739  677704 kubeadm.go:884] updating cluster {Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:36:36.954910  677704 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 23:36:36.954978  677704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:37.001232  677704 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:37.001289  677704 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:36:37.001372  677704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:36:37.035868  677704 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:36:37.035911  677704 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:36:37.035920  677704 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1207 23:36:37.036047  677704 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-858719 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
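	(Aside: the ExecStart line above is assembled from the node's config: the binary path for the Kubernetes version, the hostname override and the node IP. A hedged sketch of how such a line could be rendered with Go's text/template follows; the template and field names are invented for this example and are not minikube's own.)

```go
package main

import (
	"log"
	"os"
	"text/template"
)

// nodeParams holds just the values that appear in the ExecStart line above;
// the field names are made up for this sketch.
type nodeParams struct {
	KubeletPath      string
	HostnameOverride string
	NodeIP           string
}

const execStartTmpl = `ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.HostnameOverride}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("execstart").Parse(execStartTmpl))
	p := nodeParams{
		KubeletPath:      "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		HostnameOverride: "newest-cni-858719",
		NodeIP:           "192.168.76.2",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}
```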
	I1207 23:36:37.036135  677704 ssh_runner.go:195] Run: crio config
	I1207 23:36:37.100859  677704 cni.go:84] Creating CNI manager for ""
	I1207 23:36:37.100891  677704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:36:37.100916  677704 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1207 23:36:37.100949  677704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-858719 NodeName:newest-cni-858719 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:36:37.101134  677704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-858719"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:36:37.101225  677704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1207 23:36:37.112723  677704 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:36:37.112803  677704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:36:37.124443  677704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1207 23:36:37.142815  677704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1207 23:36:37.160115  677704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1207 23:36:37.177233  677704 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:36:37.182248  677704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:36:37.195883  677704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:37.321978  677704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:36:37.349434  677704 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719 for IP: 192.168.76.2
	I1207 23:36:37.349460  677704 certs.go:195] generating shared ca certs ...
	I1207 23:36:37.349483  677704 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:37.349673  677704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:36:37.349732  677704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:36:37.349742  677704 certs.go:257] generating profile certs ...
	I1207 23:36:37.349907  677704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/client.key
	I1207 23:36:37.349978  677704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key.81fe4363
	I1207 23:36:37.350036  677704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key
	I1207 23:36:37.350178  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:36:37.350217  677704 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:36:37.350228  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:36:37.350264  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:36:37.350296  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:36:37.350347  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:36:37.350407  677704 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:36:37.351226  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:36:37.377735  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:36:37.403808  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:36:37.427723  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:36:37.460810  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1207 23:36:37.487067  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:36:37.513861  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:36:37.539259  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/newest-cni-858719/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:36:37.565376  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:36:37.592124  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:36:37.619212  677704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:36:37.647272  677704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:36:37.667351  677704 ssh_runner.go:195] Run: openssl version
	I1207 23:36:37.676513  677704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:36:37.687971  677704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:36:37.699159  677704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:36:37.704977  677704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:36:37.705049  677704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:36:37.765716  677704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:36:37.779131  677704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:36:37.793745  677704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:36:37.805547  677704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:36:37.811144  677704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:36:37.811212  677704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:36:37.854651  677704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:36:37.863269  677704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:37.872157  677704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:36:37.881013  677704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:37.886652  677704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:37.886726  677704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:36:37.925060  677704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:36:37.933601  677704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:36:37.937936  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:36:37.974013  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:36:38.011069  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:36:38.048975  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:36:38.089220  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:36:38.126552  677704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
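	(Aside: the openssl x509 -checkend 86400 runs above confirm that each existing control-plane certificate remains valid for at least another 24 hours before the restart proceeds. A small Go equivalent using crypto/x509 is sketched below; the path in main is taken from the log and would need to exist on the machine running the sketch.)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path is still valid for at
// least the given duration (the Go analogue of `openssl x509 -checkend 86400`).
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	// Example path from the log; adjust to wherever the certs live locally.
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for at least 24h:", ok)
}
```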
	I1207 23:36:38.171830  677704 kubeadm.go:401] StartCluster: {Name:newest-cni-858719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-858719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:36:38.171932  677704 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:36:38.171998  677704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:36:38.202873  677704 cri.go:89] found id: ""
	I1207 23:36:38.202948  677704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:36:38.211787  677704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:36:38.211805  677704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:36:38.211858  677704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:36:38.220804  677704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:36:38.221673  677704 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-858719" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:36:38.222177  677704 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-389542/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-858719" cluster setting kubeconfig missing "newest-cni-858719" context setting]
	I1207 23:36:38.222947  677704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
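(Editorial note: the kubeconfig lines above show minikube noticing that the "newest-cni-858719" cluster and context entries are missing and repairing the file under a write lock. A rough client-go equivalent of that repair is sketched below; the server URL, TLS setting, and namespace are assumptions based on the log, not what minikube actually writes.)

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22054-389542/kubeconfig" // from the log
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		if !os.IsNotExist(err) {
			panic(err)
		}
		cfg = api.NewConfig() // start from an empty kubeconfig if none exists
	}

	name := "newest-cni-858719"
	if _, ok := cfg.Clusters[name]; !ok {
		// Assumption: endpoint from the log; minikube fills this from the profile config.
		cfg.Clusters[name] = &api.Cluster{
			Server:                "https://192.168.76.2:8443",
			InsecureSkipTLSVerify: true,
		}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name, Namespace: "default"}
	}

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
	fmt.Println("kubeconfig repaired:", name)
}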
	I1207 23:36:38.242108  677704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:36:38.251961  677704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1207 23:36:38.251999  677704 kubeadm.go:602] duration metric: took 40.189524ms to restartPrimaryControlPlane
	I1207 23:36:38.252009  677704 kubeadm.go:403] duration metric: took 80.190889ms to StartCluster
	I1207 23:36:38.252030  677704 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:38.252111  677704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:36:38.253734  677704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:36:38.296126  677704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:36:38.296231  677704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:36:38.296364  677704 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-858719"
	I1207 23:36:38.296391  677704 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-858719"
	I1207 23:36:38.296385  677704 addons.go:70] Setting dashboard=true in profile "newest-cni-858719"
	W1207 23:36:38.296403  677704 addons.go:248] addon storage-provisioner should already be in state true
	I1207 23:36:38.296420  677704 addons.go:239] Setting addon dashboard=true in "newest-cni-858719"
	W1207 23:36:38.296437  677704 addons.go:248] addon dashboard should already be in state true
	I1207 23:36:38.296445  677704 host.go:66] Checking if "newest-cni-858719" exists ...
	I1207 23:36:38.296432  677704 addons.go:70] Setting default-storageclass=true in profile "newest-cni-858719"
	I1207 23:36:38.296468  677704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-858719"
	I1207 23:36:38.296475  677704 host.go:66] Checking if "newest-cni-858719" exists ...
	I1207 23:36:38.296480  677704 config.go:182] Loaded profile config "newest-cni-858719": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:36:38.296903  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:38.296913  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:38.296916  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:38.304834  677704 out.go:179] * Verifying Kubernetes components...
	I1207 23:36:38.321121  677704 addons.go:239] Setting addon default-storageclass=true in "newest-cni-858719"
	W1207 23:36:38.321142  677704 addons.go:248] addon default-storageclass should already be in state true
	I1207 23:36:38.321167  677704 host.go:66] Checking if "newest-cni-858719" exists ...
	I1207 23:36:38.321502  677704 cli_runner.go:164] Run: docker container inspect newest-cni-858719 --format={{.State.Status}}
	I1207 23:36:38.331788  677704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1207 23:36:38.331860  677704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:36:38.331806  677704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:36:38.339675  677704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:36:38.339781  677704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:36:38.339832  677704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1207 23:36:35.058792  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	W1207 23:36:37.059360  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	W1207 23:36:39.558825  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	I1207 23:36:38.339851  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:38.340452  677704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:36:38.340471  677704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:36:38.340521  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:38.362068  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:38.362162  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:38.362941  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1207 23:36:38.362965  677704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1207 23:36:38.363025  677704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-858719
	I1207 23:36:38.392574  677704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/newest-cni-858719/id_rsa Username:docker}
	I1207 23:36:38.462983  677704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:36:38.484595  677704 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:36:38.484756  677704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:36:38.486717  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:36:38.491481  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:36:38.510448  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1207 23:36:38.510515  677704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1207 23:36:38.536570  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1207 23:36:38.536602  677704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1207 23:36:38.566084  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1207 23:36:38.566115  677704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1207 23:36:38.600942  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1207 23:36:38.600972  677704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1207 23:36:38.609165  677704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1207 23:36:38.609215  677704 retry.go:31] will retry after 211.51386ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1207 23:36:38.609284  677704 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1207 23:36:38.609300  677704 retry.go:31] will retry after 303.789465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
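(Editorial note: the two "apply failed, will retry" warnings above are expected during a restart: kubectl is pointed at localhost:8443 before the apiserver is listening, so the connection is refused and minikube retries the apply after a short delay, eventually reapplying with --force as seen further down. A minimal sketch of that retry-with-backoff pattern follows; the delays and attempt count are illustrative, not minikube's actual retry.go parameters.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply -f manifest` and retries with a growing
// delay while the apiserver is still coming up (e.g. connection refused).
func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
	delay := 200 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command(kubectl, "apply", "-f", manifest)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		time.Sleep(delay)
		delay *= 2
	}
	return lastErr
}

func main() {
	// Paths taken from the log above; adjust for your environment.
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		5,
	)
	if err != nil {
		fmt.Println("apply never succeeded:", err)
	}
}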
	I1207 23:36:38.623815  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1207 23:36:38.624079  677704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1207 23:36:38.653443  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1207 23:36:38.653478  677704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1207 23:36:38.678913  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1207 23:36:38.678945  677704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1207 23:36:38.701578  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1207 23:36:38.701607  677704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1207 23:36:38.720445  677704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:36:38.720502  677704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1207 23:36:38.743195  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:36:38.821620  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:36:38.913583  677704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:36:38.985710  677704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:36:41.415564  677704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.593900667s)
	I1207 23:36:41.416832  677704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.673583567s)
	I1207 23:36:41.418467  677704 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-858719 addons enable metrics-server
	
	I1207 23:36:41.532720  677704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.619029892s)
	I1207 23:36:41.533073  677704 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.547330372s)
	I1207 23:36:41.533100  677704 api_server.go:72] duration metric: took 3.236908876s to wait for apiserver process to appear ...
	I1207 23:36:41.533107  677704 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:36:41.533129  677704 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:36:41.534688  677704 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner
	I1207 23:36:41.535780  677704 addons.go:530] duration metric: took 3.239558186s for enable addons: enabled=[dashboard default-storageclass storage-provisioner]
	I1207 23:36:41.541555  677704 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:36:41.541584  677704 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:36:42.033193  677704 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:36:42.038840  677704 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1207 23:36:42.040044  677704 api_server.go:141] control plane version: v1.35.0-beta.0
	I1207 23:36:42.040086  677704 api_server.go:131] duration metric: took 506.968227ms to wait for apiserver health ...
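(Editorial note: the first /healthz probe above returns 500 because the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not finished yet; about half a second later the same endpoint returns 200 and the control plane is treated as healthy. A minimal sketch of that polling loop is below; skipping TLS verification is an assumption for brevity, whereas minikube authenticates with the cluster's client certificates.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.76.2:8443/healthz" // endpoint from the log
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: certificate verification skipped for this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	for i := 0; i < 20; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // control plane is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for a healthy apiserver")
}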
	I1207 23:36:42.040100  677704 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:36:42.044016  677704 system_pods.go:59] 8 kube-system pods found
	I1207 23:36:42.044061  677704 system_pods.go:61] "coredns-7d764666f9-dp6qz" [1403dc21-d613-4225-bf80-faf8d23e774c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1207 23:36:42.044076  677704 system_pods.go:61] "etcd-newest-cni-858719" [58c61faa-719b-477c-8216-d9aaa8554cec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:36:42.044091  677704 system_pods.go:61] "kindnet-5zzk9" [b8e05261-d743-488e-9543-b60973ff09b4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1207 23:36:42.044103  677704 system_pods.go:61] "kube-apiserver-newest-cni-858719" [343d3191-d091-4436-a131-68718cb68508] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:36:42.044116  677704 system_pods.go:61] "kube-controller-manager-newest-cni-858719" [c2876dc8-1228-4980-bd43-1d58fcd760f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:36:42.044131  677704 system_pods.go:61] "kube-proxy-p8v8n" [494a11f1-086c-43f3-92e7-4b59d073c5f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 23:36:42.044143  677704 system_pods.go:61] "kube-scheduler-newest-cni-858719" [28d72586-76c3-4f37-b20e-0c7de9fe90ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:36:42.044153  677704 system_pods.go:61] "storage-provisioner" [a39abdef-8c48-494a-9bb1-645330622d99] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1207 23:36:42.044176  677704 system_pods.go:74] duration metric: took 4.066756ms to wait for pod list to return data ...
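(Editorial note: once the apiserver is healthy, minikube lists the kube-system pods a single time to confirm the core components exist; as the lines above show, they may still be Pending or not Ready, which is acceptable here because this newest-cni profile waits only on apiserver, default_sa, and system_pods. A rough client-go equivalent of that listing is sketched below; the kubeconfig path is taken from the log and is otherwise an assumption.)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path from the log; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/22054-389542/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
}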
	I1207 23:36:42.044190  677704 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:36:42.047787  677704 default_sa.go:45] found service account: "default"
	I1207 23:36:42.047814  677704 default_sa.go:55] duration metric: took 3.616282ms for default service account to be created ...
	I1207 23:36:42.047828  677704 kubeadm.go:587] duration metric: took 3.751636263s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1207 23:36:42.047853  677704 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:36:42.051921  677704 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:36:42.051998  677704 node_conditions.go:123] node cpu capacity is 8
	I1207 23:36:42.052034  677704 node_conditions.go:105] duration metric: took 4.174035ms to run NodePressure ...
	I1207 23:36:42.052060  677704 start.go:242] waiting for startup goroutines ...
	I1207 23:36:42.052081  677704 start.go:247] waiting for cluster config update ...
	I1207 23:36:42.052105  677704 start.go:256] writing updated cluster config ...
	I1207 23:36:42.052449  677704 ssh_runner.go:195] Run: rm -f paused
	I1207 23:36:42.126816  677704 start.go:625] kubectl: 1.34.2, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1207 23:36:42.128432  677704 out.go:179] * Done! kubectl is now configured to use "newest-cni-858719" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 07 23:36:30 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:30.833082843Z" level=info msg="Starting container: d67e9dd816e1de9e505bb736a4b10bf308a020ba5c848918c6aff2c9eb803a10" id=f1171f26-349e-48cb-99c1-77eb00eff489 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:36:30 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:30.836057989Z" level=info msg="Started container" PID=1869 containerID=d67e9dd816e1de9e505bb736a4b10bf308a020ba5c848918c6aff2c9eb803a10 description=kube-system/coredns-66bc5c9577-p4v2f/coredns id=f1171f26-349e-48cb-99c1-77eb00eff489 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb2a0bc6e8246478bfe219e0c3a8937393586fc8ccdf8e0bf344646d811dee2e
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.041887883Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c5974b29-6209-4054-a579-2ee95a6c1232 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.041993388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.04822318Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bbd7209bcf01eeec6e797c3091a74a99f1c6bddd05c98f17be678a76bea9f6d9 UID:202a50b4-b2e7-4b74-a299-5f38dd0bd9c5 NetNS:/var/run/netns/2e947b7d-c817-4784-988e-56595ea0db66 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000398930}] Aliases:map[]}"
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.048254681Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.059846767Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bbd7209bcf01eeec6e797c3091a74a99f1c6bddd05c98f17be678a76bea9f6d9 UID:202a50b4-b2e7-4b74-a299-5f38dd0bd9c5 NetNS:/var/run/netns/2e947b7d-c817-4784-988e-56595ea0db66 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000398930}] Aliases:map[]}"
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.059972851Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.060928217Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.061688416Z" level=info msg="Ran pod sandbox bbd7209bcf01eeec6e797c3091a74a99f1c6bddd05c98f17be678a76bea9f6d9 with infra container: default/busybox/POD" id=c5974b29-6209-4054-a579-2ee95a6c1232 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.063009905Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1f13bf8b-6b06-4a09-9044-9163266d04be name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.063136591Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1f13bf8b-6b06-4a09-9044-9163266d04be name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.063171932Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1f13bf8b-6b06-4a09-9044-9163266d04be name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.063957248Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=23b03576-5a9a-4ed4-bb7b-19a5dfb66278 name=/runtime.v1.ImageService/PullImage
	Dec 07 23:36:34 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:34.06579178Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 07 23:36:36 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:36.141218977Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=23b03576-5a9a-4ed4-bb7b-19a5dfb66278 name=/runtime.v1.ImageService/PullImage
	Dec 07 23:36:36 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:36.142056887Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=44390a04-4a74-4fad-ba63-420dbb44dcaa name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:36 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:36.143935767Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ed195475-4c8c-4c14-95bc-7e5de3c6fd4d name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:36:36 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:36.147724901Z" level=info msg="Creating container: default/busybox/busybox" id=67a43aa4-98d2-409f-aa38-af93475a12e7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:36 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:36.147900495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:36 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:36.153021172Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:36 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:36.153660276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:36:36 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:36.194977874Z" level=info msg="Created container 453dc9910e4fd33f890004a12e6d571d02cca9307ccc69e8acf0be057776fa4b: default/busybox/busybox" id=67a43aa4-98d2-409f-aa38-af93475a12e7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:36:36 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:36.196319085Z" level=info msg="Starting container: 453dc9910e4fd33f890004a12e6d571d02cca9307ccc69e8acf0be057776fa4b" id=efa8dc74-49ab-4314-9741-e47ca30e8910 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:36:36 default-k8s-diff-port-312944 crio[772]: time="2025-12-07T23:36:36.199085257Z" level=info msg="Started container" PID=1945 containerID=453dc9910e4fd33f890004a12e6d571d02cca9307ccc69e8acf0be057776fa4b description=default/busybox/busybox id=efa8dc74-49ab-4314-9741-e47ca30e8910 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bbd7209bcf01eeec6e797c3091a74a99f1c6bddd05c98f17be678a76bea9f6d9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	453dc9910e4fd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   bbd7209bcf01e       busybox                                                default
	d67e9dd816e1d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 seconds ago      Running             coredns                   0                   cb2a0bc6e8246       coredns-66bc5c9577-p4v2f                               kube-system
	1ca10a58abc19       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   49b11c2e57ab5       storage-provisioner                                    kube-system
	217cdca085e70       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      25 seconds ago      Running             kube-proxy                0                   0474d303cc99f       kube-proxy-7stg5                                       kube-system
	60d52feb27d62       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   bd425498255cc       kindnet-55xbl                                          kube-system
	5b14fb17e7e15       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      35 seconds ago      Running             kube-scheduler            0                   92ed7e78282b5       kube-scheduler-default-k8s-diff-port-312944            kube-system
	541d03cfeb073       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   33637afa96afc       etcd-default-k8s-diff-port-312944                      kube-system
	c93edfcf871bd       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      35 seconds ago      Running             kube-controller-manager   0                   5390f3b6072c0       kube-controller-manager-default-k8s-diff-port-312944   kube-system
	495cc12fb33a8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      35 seconds ago      Running             kube-apiserver            0                   ec52ede770f93       kube-apiserver-default-k8s-diff-port-312944            kube-system
	
	
	==> coredns [d67e9dd816e1de9e505bb736a4b10bf308a020ba5c848918c6aff2c9eb803a10] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49126 - 51349 "HINFO IN 2489811623403037387.9121377790985326118. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02999436s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-312944
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-312944
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=default-k8s-diff-port-312944
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_36_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:36:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-312944
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:36:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:36:44 +0000   Sun, 07 Dec 2025 23:36:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:36:44 +0000   Sun, 07 Dec 2025 23:36:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:36:44 +0000   Sun, 07 Dec 2025 23:36:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:36:44 +0000   Sun, 07 Dec 2025 23:36:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-312944
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                bd0038bf-5fca-4fcf-bfc4-04aff0b70aa3
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-p4v2f                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-312944                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-55xbl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-312944             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-312944    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-7stg5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-312944             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node default-k8s-diff-port-312944 event: Registered Node default-k8s-diff-port-312944 in Controller
	  Normal  NodeReady                15s                kubelet          Node default-k8s-diff-port-312944 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [541d03cfeb073e82be85824627ec80a3f07610b900890945c0b600b6320a49c3] <==
	{"level":"warn","ts":"2025-12-07T23:36:10.417113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.428412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.441700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.450228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.461091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.471557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.481022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.491375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.500297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.527422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.533307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.542944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.558479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.569710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.578568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.587620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.596704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.604842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.615645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.625445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.635928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.644668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.659390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.668498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:10.675970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46530","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:36:45 up  2:19,  0 user,  load average: 4.73, 2.89, 2.07
	Linux default-k8s-diff-port-312944 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [60d52feb27d62eacaf098bc8fd9707e28e23a082ee4acda29d3f2d5eddc8be89] <==
	I1207 23:36:19.937124       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:36:19.937424       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1207 23:36:19.937596       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:36:19.937835       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:36:19.938021       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:36:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:36:20.236394       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:36:20.236437       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:36:20.236448       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:36:20.236761       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:36:20.635707       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:36:20.635749       1 metrics.go:72] Registering metrics
	I1207 23:36:20.635828       1 controller.go:711] "Syncing nftables rules"
	I1207 23:36:30.237860       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:36:30.237992       1 main.go:301] handling current node
	I1207 23:36:40.238177       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:36:40.238225       1 main.go:301] handling current node
	
	
	==> kube-apiserver [495cc12fb33a810ed1ef7fdfdf292c5813aa67b1b891d62c3922208b0a100f11] <==
	I1207 23:36:11.244830       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:36:11.245086       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 23:36:11.245691       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1207 23:36:11.251801       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1207 23:36:11.251895       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1207 23:36:11.252153       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:36:11.273185       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:36:12.146431       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1207 23:36:12.153320       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1207 23:36:12.153359       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 23:36:12.795472       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:36:12.839796       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:36:12.951358       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1207 23:36:12.958431       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1207 23:36:12.959766       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:36:12.964424       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:36:13.187077       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:36:13.848222       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:36:13.860594       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1207 23:36:13.873897       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1207 23:36:18.991163       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:36:19.145041       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:36:19.150557       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:36:19.238735       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1207 23:36:43.841383       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:39468: use of closed network connection
	
	
	==> kube-controller-manager [c93edfcf871bd8a6c534b1c49d7735791f451e8a4e024643671edb961103e4e2] <==
	I1207 23:36:18.187118       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1207 23:36:18.188508       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1207 23:36:18.188897       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1207 23:36:18.189067       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1207 23:36:18.189565       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1207 23:36:18.190002       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1207 23:36:18.190280       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1207 23:36:18.190402       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1207 23:36:18.190759       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1207 23:36:18.190787       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1207 23:36:18.190872       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1207 23:36:18.190927       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1207 23:36:18.191006       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1207 23:36:18.191076       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1207 23:36:18.191108       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1207 23:36:18.191134       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1207 23:36:18.193750       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 23:36:18.195005       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1207 23:36:18.198842       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 23:36:18.205529       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1207 23:36:18.205868       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-312944" podCIDRs=["10.244.0.0/24"]
	I1207 23:36:18.212756       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 23:36:18.212861       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1207 23:36:18.220586       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1207 23:36:33.145792       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [217cdca085e70090bc196cde205457c4227e8c1c734a22a4a69bf6428cf384c2] <==
	I1207 23:36:19.768424       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:36:19.849845       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 23:36:19.950230       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 23:36:19.950287       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1207 23:36:19.950417       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:36:19.976837       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:36:19.976891       1 server_linux.go:132] "Using iptables Proxier"
	I1207 23:36:19.983077       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:36:19.983569       1 server.go:527] "Version info" version="v1.34.2"
	I1207 23:36:19.983610       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:36:19.985231       1 config.go:200] "Starting service config controller"
	I1207 23:36:19.985250       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:36:19.985273       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:36:19.985278       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:36:19.985292       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:36:19.985296       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:36:19.985607       1 config.go:309] "Starting node config controller"
	I1207 23:36:19.985635       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:36:20.085761       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:36:20.085811       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:36:20.085843       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:36:20.086196       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [5b14fb17e7e157fd52ae47f45cdb324e533f2cf4df1368f45c6467d891e07993] <==
	E1207 23:36:11.208676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 23:36:11.209169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1207 23:36:11.209169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1207 23:36:11.209174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 23:36:11.209176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 23:36:11.209240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 23:36:11.209256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1207 23:36:11.209355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 23:36:11.209660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 23:36:11.210000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1207 23:36:11.210708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 23:36:11.210817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 23:36:12.147854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1207 23:36:12.188514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1207 23:36:12.222648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1207 23:36:12.274512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1207 23:36:12.277735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1207 23:36:12.282874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1207 23:36:12.294232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1207 23:36:12.314888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1207 23:36:12.324148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1207 23:36:12.336459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1207 23:36:12.461175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1207 23:36:12.562744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1207 23:36:14.404535       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 23:36:14 default-k8s-diff-port-312944 kubelet[1345]: E1207 23:36:14.744441    1345 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-312944\" already exists" pod="kube-system/etcd-default-k8s-diff-port-312944"
	Dec 07 23:36:14 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:14.776095    1345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-312944" podStartSLOduration=1.776072264 podStartE2EDuration="1.776072264s" podCreationTimestamp="2025-12-07 23:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:14.763536321 +0000 UTC m=+1.165493158" watchObservedRunningTime="2025-12-07 23:36:14.776072264 +0000 UTC m=+1.178029110"
	Dec 07 23:36:14 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:14.776263    1345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-312944" podStartSLOduration=1.776248057 podStartE2EDuration="1.776248057s" podCreationTimestamp="2025-12-07 23:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:14.776010438 +0000 UTC m=+1.177967284" watchObservedRunningTime="2025-12-07 23:36:14.776248057 +0000 UTC m=+1.178204915"
	Dec 07 23:36:14 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:14.802895    1345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-312944" podStartSLOduration=1.802869379 podStartE2EDuration="1.802869379s" podCreationTimestamp="2025-12-07 23:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:14.789675221 +0000 UTC m=+1.191632069" watchObservedRunningTime="2025-12-07 23:36:14.802869379 +0000 UTC m=+1.204826225"
	Dec 07 23:36:14 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:14.803163    1345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-312944" podStartSLOduration=1.803149992 podStartE2EDuration="1.803149992s" podCreationTimestamp="2025-12-07 23:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:14.803008472 +0000 UTC m=+1.204965318" watchObservedRunningTime="2025-12-07 23:36:14.803149992 +0000 UTC m=+1.205106838"
	Dec 07 23:36:18 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:18.254868    1345 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 07 23:36:18 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:18.255800    1345 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 07 23:36:19 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:19.324265    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/627ffd8d-a2eb-4d9c-b1bc-a71f609273bc-xtables-lock\") pod \"kindnet-55xbl\" (UID: \"627ffd8d-a2eb-4d9c-b1bc-a71f609273bc\") " pod="kube-system/kindnet-55xbl"
	Dec 07 23:36:19 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:19.324368    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/627ffd8d-a2eb-4d9c-b1bc-a71f609273bc-lib-modules\") pod \"kindnet-55xbl\" (UID: \"627ffd8d-a2eb-4d9c-b1bc-a71f609273bc\") " pod="kube-system/kindnet-55xbl"
	Dec 07 23:36:19 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:19.324395    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7e00d0a-bd16-45c1-a58c-e0569a0bcb33-xtables-lock\") pod \"kube-proxy-7stg5\" (UID: \"b7e00d0a-bd16-45c1-a58c-e0569a0bcb33\") " pod="kube-system/kube-proxy-7stg5"
	Dec 07 23:36:19 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:19.324420    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7e00d0a-bd16-45c1-a58c-e0569a0bcb33-lib-modules\") pod \"kube-proxy-7stg5\" (UID: \"b7e00d0a-bd16-45c1-a58c-e0569a0bcb33\") " pod="kube-system/kube-proxy-7stg5"
	Dec 07 23:36:19 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:19.324445    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq78h\" (UniqueName: \"kubernetes.io/projected/b7e00d0a-bd16-45c1-a58c-e0569a0bcb33-kube-api-access-lq78h\") pod \"kube-proxy-7stg5\" (UID: \"b7e00d0a-bd16-45c1-a58c-e0569a0bcb33\") " pod="kube-system/kube-proxy-7stg5"
	Dec 07 23:36:19 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:19.324477    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxwmp\" (UniqueName: \"kubernetes.io/projected/627ffd8d-a2eb-4d9c-b1bc-a71f609273bc-kube-api-access-vxwmp\") pod \"kindnet-55xbl\" (UID: \"627ffd8d-a2eb-4d9c-b1bc-a71f609273bc\") " pod="kube-system/kindnet-55xbl"
	Dec 07 23:36:19 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:19.324497    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7e00d0a-bd16-45c1-a58c-e0569a0bcb33-kube-proxy\") pod \"kube-proxy-7stg5\" (UID: \"b7e00d0a-bd16-45c1-a58c-e0569a0bcb33\") " pod="kube-system/kube-proxy-7stg5"
	Dec 07 23:36:19 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:19.324611    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/627ffd8d-a2eb-4d9c-b1bc-a71f609273bc-cni-cfg\") pod \"kindnet-55xbl\" (UID: \"627ffd8d-a2eb-4d9c-b1bc-a71f609273bc\") " pod="kube-system/kindnet-55xbl"
	Dec 07 23:36:19 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:19.767147    1345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7stg5" podStartSLOduration=0.767124618 podStartE2EDuration="767.124618ms" podCreationTimestamp="2025-12-07 23:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:19.766841187 +0000 UTC m=+6.168798037" watchObservedRunningTime="2025-12-07 23:36:19.767124618 +0000 UTC m=+6.169081465"
	Dec 07 23:36:19 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:19.783164    1345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-55xbl" podStartSLOduration=0.783140996 podStartE2EDuration="783.140996ms" podCreationTimestamp="2025-12-07 23:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:19.782939562 +0000 UTC m=+6.184896409" watchObservedRunningTime="2025-12-07 23:36:19.783140996 +0000 UTC m=+6.185097844"
	Dec 07 23:36:30 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:30.410117    1345 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 07 23:36:30 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:30.510891    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx6cc\" (UniqueName: \"kubernetes.io/projected/adffbdc2-708d-4f45-9f91-1697332156e3-kube-api-access-cx6cc\") pod \"storage-provisioner\" (UID: \"adffbdc2-708d-4f45-9f91-1697332156e3\") " pod="kube-system/storage-provisioner"
	Dec 07 23:36:30 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:30.511159    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/113d6978-708b-4941-acbc-0fa4a639f318-config-volume\") pod \"coredns-66bc5c9577-p4v2f\" (UID: \"113d6978-708b-4941-acbc-0fa4a639f318\") " pod="kube-system/coredns-66bc5c9577-p4v2f"
	Dec 07 23:36:30 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:30.511268    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68gvx\" (UniqueName: \"kubernetes.io/projected/113d6978-708b-4941-acbc-0fa4a639f318-kube-api-access-68gvx\") pod \"coredns-66bc5c9577-p4v2f\" (UID: \"113d6978-708b-4941-acbc-0fa4a639f318\") " pod="kube-system/coredns-66bc5c9577-p4v2f"
	Dec 07 23:36:30 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:30.511575    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/adffbdc2-708d-4f45-9f91-1697332156e3-tmp\") pod \"storage-provisioner\" (UID: \"adffbdc2-708d-4f45-9f91-1697332156e3\") " pod="kube-system/storage-provisioner"
	Dec 07 23:36:31 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:31.799744    1345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p4v2f" podStartSLOduration=12.79972135 podStartE2EDuration="12.79972135s" podCreationTimestamp="2025-12-07 23:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:31.799521501 +0000 UTC m=+18.201478347" watchObservedRunningTime="2025-12-07 23:36:31.79972135 +0000 UTC m=+18.201678195"
	Dec 07 23:36:33 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:33.732648    1345 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.732618011 podStartE2EDuration="13.732618011s" podCreationTimestamp="2025-12-07 23:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-07 23:36:31.821637576 +0000 UTC m=+18.223594422" watchObservedRunningTime="2025-12-07 23:36:33.732618011 +0000 UTC m=+20.134574856"
	Dec 07 23:36:33 default-k8s-diff-port-312944 kubelet[1345]: I1207 23:36:33.835864    1345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfzrd\" (UniqueName: \"kubernetes.io/projected/202a50b4-b2e7-4b74-a299-5f38dd0bd9c5-kube-api-access-tfzrd\") pod \"busybox\" (UID: \"202a50b4-b2e7-4b74-a299-5f38dd0bd9c5\") " pod="default/busybox"
	
	
	==> storage-provisioner [1ca10a58abc19890846aec91c3be34d04ab9510fcf3187fc5fba2760a8d23559] <==
	I1207 23:36:30.844243       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 23:36:30.857526       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 23:36:30.857596       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 23:36:30.861830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:30.872548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:36:30.872963       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 23:36:30.873588       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-312944_112561e8-f64a-4a71-9c41-93ae8bde8a3e!
	I1207 23:36:30.874366       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8139ddd6-5276-4d69-8ef0-8cf0f6816009", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-312944_112561e8-f64a-4a71-9c41-93ae8bde8a3e became leader
	W1207 23:36:30.884970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:30.891544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:36:30.977558       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-312944_112561e8-f64a-4a71-9c41-93ae8bde8a3e!
	W1207 23:36:32.895268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:32.900885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:34.904066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:34.908286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:36.912475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:36.918775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:38.924538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:38.932214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:40.937075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:41.030270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:43.034265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:43.039406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:45.042708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:36:45.047984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-312944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.40s)
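For local triage, the post-mortem collection the harness runs above can be replayed by hand against the same profile. A minimal sketch, assuming a checkout where out/minikube-linux-amd64 is built and the default-k8s-diff-port-312944 profile from the logs above still exists:

	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944
	kubectl --context default-k8s-diff-port-312944 get po -A --field-selector=status.phase!=Running
	out/minikube-linux-amd64 -p default-k8s-diff-port-312944 logs -n 25

These are the same status, pod-phase, and logs queries shown in the helpers_test.go lines above, just run outside the harness.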

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-654118 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-654118 --alsologtostderr -v=1: exit status 80 (2.452063997s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-654118 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:37:25.361619  692292 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:37:25.361892  692292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:37:25.361903  692292 out.go:374] Setting ErrFile to fd 2...
	I1207 23:37:25.361908  692292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:37:25.362093  692292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:37:25.362404  692292 out.go:368] Setting JSON to false
	I1207 23:37:25.362422  692292 mustload.go:66] Loading cluster: embed-certs-654118
	I1207 23:37:25.362785  692292 config.go:182] Loaded profile config "embed-certs-654118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:25.363179  692292 cli_runner.go:164] Run: docker container inspect embed-certs-654118 --format={{.State.Status}}
	I1207 23:37:25.382288  692292 host.go:66] Checking if "embed-certs-654118" exists ...
	I1207 23:37:25.382607  692292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:25.444814  692292 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-07 23:37:25.43421094 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:25.445612  692292 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-654118 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1207 23:37:25.447602  692292 out.go:179] * Pausing node embed-certs-654118 ... 
	I1207 23:37:25.448944  692292 host.go:66] Checking if "embed-certs-654118" exists ...
	I1207 23:37:25.449206  692292 ssh_runner.go:195] Run: systemctl --version
	I1207 23:37:25.449244  692292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-654118
	I1207 23:37:25.470049  692292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/embed-certs-654118/id_rsa Username:docker}
	I1207 23:37:25.569563  692292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:37:25.584755  692292 pause.go:52] kubelet running: true
	I1207 23:37:25.584826  692292 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:37:25.764147  692292 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:37:25.764232  692292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:37:25.848679  692292 cri.go:89] found id: "a230f8e09c8a793d24bc930a0fb7c9e8f555725f765382beb79ac8621a4e3455"
	I1207 23:37:25.848704  692292 cri.go:89] found id: "a6c98c6dc2249ec043cc985ad99b2be276e7fb077b56a646b774572f9b0e43e9"
	I1207 23:37:25.848717  692292 cri.go:89] found id: "fa59387c3b4d4bfd483cee16a4f633f23a1c3789f8c37f1fa4f4d2b9c9a3ed6a"
	I1207 23:37:25.848722  692292 cri.go:89] found id: "64270ee075317594cd8574f52acb74ad205fd052a7c4a7a070e7c82ad1a83c22"
	I1207 23:37:25.848727  692292 cri.go:89] found id: "9e595ec0ec0a2a4f455100334da2b7bc91d7b90dbc422aa9f96b4bfcbd14e784"
	I1207 23:37:25.848733  692292 cri.go:89] found id: "55f614a7d89079ce6b0150051faf8399dea9fe3ee0db5301b1f6eb9811f274fb"
	I1207 23:37:25.848737  692292 cri.go:89] found id: "de2a8fefd04073ed27eff698be1e31a40e77a0d4e91f60687ad522521cb5f30a"
	I1207 23:37:25.848741  692292 cri.go:89] found id: "63dcc5abcffa72045b4ce0dfe82b7bff6403005be06354ce602e9140d0e7be08"
	I1207 23:37:25.848745  692292 cri.go:89] found id: "1c04ccfa6ad08a37efa73abd2f81a78cc8ab1e12cae0f419d99b512bde0a19c0"
	I1207 23:37:25.848758  692292 cri.go:89] found id: "977e8fafdf74218cf51fae0fe63b18398a1e392fd9aca04d48a77e94825c5eb1"
	I1207 23:37:25.848762  692292 cri.go:89] found id: "fbf4535fa292992611e22cc68e13a796e2e4470d6418b306a556048000c2c4a4"
	I1207 23:37:25.848766  692292 cri.go:89] found id: ""
	I1207 23:37:25.848825  692292 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:37:25.862171  692292 retry.go:31] will retry after 189.491757ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:37:25Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:37:26.052601  692292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:37:26.066122  692292 pause.go:52] kubelet running: false
	I1207 23:37:26.066196  692292 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:37:26.236867  692292 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:37:26.236963  692292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:37:26.317776  692292 cri.go:89] found id: "a230f8e09c8a793d24bc930a0fb7c9e8f555725f765382beb79ac8621a4e3455"
	I1207 23:37:26.317803  692292 cri.go:89] found id: "a6c98c6dc2249ec043cc985ad99b2be276e7fb077b56a646b774572f9b0e43e9"
	I1207 23:37:26.317810  692292 cri.go:89] found id: "fa59387c3b4d4bfd483cee16a4f633f23a1c3789f8c37f1fa4f4d2b9c9a3ed6a"
	I1207 23:37:26.317815  692292 cri.go:89] found id: "64270ee075317594cd8574f52acb74ad205fd052a7c4a7a070e7c82ad1a83c22"
	I1207 23:37:26.317819  692292 cri.go:89] found id: "9e595ec0ec0a2a4f455100334da2b7bc91d7b90dbc422aa9f96b4bfcbd14e784"
	I1207 23:37:26.317824  692292 cri.go:89] found id: "55f614a7d89079ce6b0150051faf8399dea9fe3ee0db5301b1f6eb9811f274fb"
	I1207 23:37:26.317829  692292 cri.go:89] found id: "de2a8fefd04073ed27eff698be1e31a40e77a0d4e91f60687ad522521cb5f30a"
	I1207 23:37:26.317833  692292 cri.go:89] found id: "63dcc5abcffa72045b4ce0dfe82b7bff6403005be06354ce602e9140d0e7be08"
	I1207 23:37:26.317837  692292 cri.go:89] found id: "1c04ccfa6ad08a37efa73abd2f81a78cc8ab1e12cae0f419d99b512bde0a19c0"
	I1207 23:37:26.317845  692292 cri.go:89] found id: "977e8fafdf74218cf51fae0fe63b18398a1e392fd9aca04d48a77e94825c5eb1"
	I1207 23:37:26.317849  692292 cri.go:89] found id: "fbf4535fa292992611e22cc68e13a796e2e4470d6418b306a556048000c2c4a4"
	I1207 23:37:26.317854  692292 cri.go:89] found id: ""
	I1207 23:37:26.317898  692292 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:37:26.331112  692292 retry.go:31] will retry after 495.478595ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:37:26Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:37:26.827428  692292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:37:26.843813  692292 pause.go:52] kubelet running: false
	I1207 23:37:26.843887  692292 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:37:27.010606  692292 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:37:27.010706  692292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:37:27.083077  692292 cri.go:89] found id: "a230f8e09c8a793d24bc930a0fb7c9e8f555725f765382beb79ac8621a4e3455"
	I1207 23:37:27.083115  692292 cri.go:89] found id: "a6c98c6dc2249ec043cc985ad99b2be276e7fb077b56a646b774572f9b0e43e9"
	I1207 23:37:27.083119  692292 cri.go:89] found id: "fa59387c3b4d4bfd483cee16a4f633f23a1c3789f8c37f1fa4f4d2b9c9a3ed6a"
	I1207 23:37:27.083122  692292 cri.go:89] found id: "64270ee075317594cd8574f52acb74ad205fd052a7c4a7a070e7c82ad1a83c22"
	I1207 23:37:27.083125  692292 cri.go:89] found id: "9e595ec0ec0a2a4f455100334da2b7bc91d7b90dbc422aa9f96b4bfcbd14e784"
	I1207 23:37:27.083129  692292 cri.go:89] found id: "55f614a7d89079ce6b0150051faf8399dea9fe3ee0db5301b1f6eb9811f274fb"
	I1207 23:37:27.083131  692292 cri.go:89] found id: "de2a8fefd04073ed27eff698be1e31a40e77a0d4e91f60687ad522521cb5f30a"
	I1207 23:37:27.083134  692292 cri.go:89] found id: "63dcc5abcffa72045b4ce0dfe82b7bff6403005be06354ce602e9140d0e7be08"
	I1207 23:37:27.083137  692292 cri.go:89] found id: "1c04ccfa6ad08a37efa73abd2f81a78cc8ab1e12cae0f419d99b512bde0a19c0"
	I1207 23:37:27.083148  692292 cri.go:89] found id: "977e8fafdf74218cf51fae0fe63b18398a1e392fd9aca04d48a77e94825c5eb1"
	I1207 23:37:27.083153  692292 cri.go:89] found id: "fbf4535fa292992611e22cc68e13a796e2e4470d6418b306a556048000c2c4a4"
	I1207 23:37:27.083155  692292 cri.go:89] found id: ""
	I1207 23:37:27.083192  692292 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:37:27.095385  692292 retry.go:31] will retry after 376.843085ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:37:27Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:37:27.472984  692292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:37:27.487371  692292 pause.go:52] kubelet running: false
	I1207 23:37:27.487423  692292 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:37:27.645395  692292 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:37:27.645487  692292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:37:27.716917  692292 cri.go:89] found id: "a230f8e09c8a793d24bc930a0fb7c9e8f555725f765382beb79ac8621a4e3455"
	I1207 23:37:27.716944  692292 cri.go:89] found id: "a6c98c6dc2249ec043cc985ad99b2be276e7fb077b56a646b774572f9b0e43e9"
	I1207 23:37:27.716950  692292 cri.go:89] found id: "fa59387c3b4d4bfd483cee16a4f633f23a1c3789f8c37f1fa4f4d2b9c9a3ed6a"
	I1207 23:37:27.716955  692292 cri.go:89] found id: "64270ee075317594cd8574f52acb74ad205fd052a7c4a7a070e7c82ad1a83c22"
	I1207 23:37:27.716960  692292 cri.go:89] found id: "9e595ec0ec0a2a4f455100334da2b7bc91d7b90dbc422aa9f96b4bfcbd14e784"
	I1207 23:37:27.716965  692292 cri.go:89] found id: "55f614a7d89079ce6b0150051faf8399dea9fe3ee0db5301b1f6eb9811f274fb"
	I1207 23:37:27.716969  692292 cri.go:89] found id: "de2a8fefd04073ed27eff698be1e31a40e77a0d4e91f60687ad522521cb5f30a"
	I1207 23:37:27.716973  692292 cri.go:89] found id: "63dcc5abcffa72045b4ce0dfe82b7bff6403005be06354ce602e9140d0e7be08"
	I1207 23:37:27.716977  692292 cri.go:89] found id: "1c04ccfa6ad08a37efa73abd2f81a78cc8ab1e12cae0f419d99b512bde0a19c0"
	I1207 23:37:27.716985  692292 cri.go:89] found id: "977e8fafdf74218cf51fae0fe63b18398a1e392fd9aca04d48a77e94825c5eb1"
	I1207 23:37:27.716990  692292 cri.go:89] found id: "fbf4535fa292992611e22cc68e13a796e2e4470d6418b306a556048000c2c4a4"
	I1207 23:37:27.716993  692292 cri.go:89] found id: ""
	I1207 23:37:27.717057  692292 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:37:27.732277  692292 out.go:203] 
	W1207 23:37:27.733602  692292 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:37:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:37:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 23:37:27.733625  692292 out.go:285] * 
	* 
	W1207 23:37:27.740102  692292 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 23:37:27.741582  692292 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-654118 --alsologtostderr -v=1 failed: exit status 80
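Every retry above fails at the same point: sudo runc list -f json exits 1 with "open /run/runc: no such file or directory", while crictl still lists the kube-system containers. A hedged triage sketch (assumes the embed-certs-654118 profile is still running and reachable via minikube ssh; the /run/crun path is only a guess at an alternative runtime state directory, not something shown in the logs):

	out/minikube-linux-amd64 ssh -p embed-certs-654118 -- sudo runc list -f json            # reproduce the exact failing call
	out/minikube-linux-amd64 ssh -p embed-certs-654118 -- sudo ls -ld /run/runc /run/crun   # which runtime state directories actually exist
	out/minikube-linux-amd64 ssh -p embed-certs-654118 -- sudo grep -r default_runtime /etc/crio/   # which OCI runtime CRI-O is configured to use
	out/minikube-linux-amd64 ssh -p embed-certs-654118 -- sudo crictl ps --quiet            # containers remain visible through the CRI, per the log above

If the configured runtime keeps its state somewhere other than /run/runc, that would explain why the pause path's runc listing fails even though the containers are up.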
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-654118
helpers_test.go:243: (dbg) docker inspect embed-certs-654118:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06",
	        "Created": "2025-12-07T23:34:44.331761062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 673801,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:36:20.069725191Z",
	            "FinishedAt": "2025-12-07T23:36:18.346277135Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06/hostname",
	        "HostsPath": "/var/lib/docker/containers/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06/hosts",
	        "LogPath": "/var/lib/docker/containers/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06-json.log",
	        "Name": "/embed-certs-654118",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-654118:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-654118",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06",
	                "LowerDir": "/var/lib/docker/overlay2/b033e7e02e0290ed765f992d60e4a6dc2240c75ef7b2064b0c47febefaf70b5f-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b033e7e02e0290ed765f992d60e4a6dc2240c75ef7b2064b0c47febefaf70b5f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b033e7e02e0290ed765f992d60e4a6dc2240c75ef7b2064b0c47febefaf70b5f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b033e7e02e0290ed765f992d60e4a6dc2240c75ef7b2064b0c47febefaf70b5f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-654118",
	                "Source": "/var/lib/docker/volumes/embed-certs-654118/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-654118",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-654118",
	                "name.minikube.sigs.k8s.io": "embed-certs-654118",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "49e2d7ac6f0b7433403a9e02f76c19ccaeaa3e1676d41fb879ec5639a6b4e3f1",
	            "SandboxKey": "/var/run/docker/netns/49e2d7ac6f0b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-654118": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eae277504c57bb79a350439d5c756b806a60082b42083657979990253737dde6",
	                    "EndpointID": "8d2449fb69a58971a630c085fbf632f3315958f53c4f2268ff88adc8cda14cba",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "52:c4:16:61:28:af",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-654118",
	                        "c652041fdce0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
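The inspect output above records the host-port mappings the harness relies on (SSH on 22/tcp published at 127.0.0.1:33463, the API server on 8443/tcp at 127.0.0.1:33466). For manual triage, the same mapping can be read back with a Go-template query of the kind that also appears later in these logs; a minimal sketch, assuming the embed-certs-654118 container is still present on the host:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-654118

This prints only the forwarded host port (e.g. 33463), which avoids re-reading the full JSON dump when re-checking connectivity by hand.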
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-654118 -n embed-certs-654118
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-654118 -n embed-certs-654118: exit status 2 (354.553451ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
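For manual triage of this failure, the same steps the harness ran can be repeated by hand against the profile named above; a minimal sketch, with the command forms taken from this report:

	out/minikube-linux-amd64 pause -p embed-certs-654118 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-654118 -n embed-certs-654118
	out/minikube-linux-amd64 -p embed-certs-654118 logs -n 25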
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-654118 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-654118 logs -n 25: (1.228482718s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-312944 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ start   │ -p default-k8s-diff-port-312944 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 pgrep -a kubelet                                                                                                                                          │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /etc/nsswitch.conf                                                                                                                               │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /etc/hosts                                                                                                                                       │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /etc/resolv.conf                                                                                                                                 │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo crictl pods                                                                                                                                          │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo crictl ps --all                                                                                                                                      │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                               │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo ip a s                                                                                                                                               │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo ip r s                                                                                                                                               │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo iptables-save                                                                                                                                        │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo iptables -t nat -L -n -v                                                                                                                             │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ image   │ embed-certs-654118 image list --format=json                                                                                                                              │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo systemctl status kubelet --all --full --no-pager                                                                                                     │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ pause   │ -p embed-certs-654118 --alsologtostderr -v=1                                                                                                                             │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo systemctl cat kubelet --no-pager                                                                                                                     │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                      │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                     │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /var/lib/kubelet/config.yaml                                                                                                                     │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo systemctl status docker --all --full --no-pager                                                                                                      │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo systemctl cat docker --no-pager                                                                                                                      │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /etc/docker/daemon.json                                                                                                                          │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo docker system info                                                                                                                                   │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo systemctl status cri-docker --all --full --no-pager                                                                                                  │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:37:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:37:04.722045  687309 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:37:04.722146  687309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:37:04.722151  687309 out.go:374] Setting ErrFile to fd 2...
	I1207 23:37:04.722155  687309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:37:04.722416  687309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:37:04.722887  687309 out.go:368] Setting JSON to false
	I1207 23:37:04.724036  687309 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8369,"bootTime":1765142256,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:37:04.724104  687309 start.go:143] virtualization: kvm guest
	I1207 23:37:04.726136  687309 out.go:179] * [default-k8s-diff-port-312944] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:37:04.727393  687309 notify.go:221] Checking for updates...
	I1207 23:37:04.727408  687309 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:37:04.728657  687309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:37:04.730027  687309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:37:04.731379  687309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:37:04.732624  687309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:37:04.733762  687309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:37:04.735574  687309 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:04.736385  687309 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:37:04.761948  687309 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:37:04.762056  687309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:04.817188  687309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-07 23:37:04.807477634 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:04.817291  687309 docker.go:319] overlay module found
	I1207 23:37:04.820120  687309 out.go:179] * Using the docker driver based on existing profile
	I1207 23:37:04.821288  687309 start.go:309] selected driver: docker
	I1207 23:37:04.821309  687309 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-312944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:04.821413  687309 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:37:04.821985  687309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:04.885662  687309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-07 23:37:04.874804599 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:04.885946  687309 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:37:04.885980  687309 cni.go:84] Creating CNI manager for ""
	I1207 23:37:04.886031  687309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:37:04.886072  687309 start.go:353] cluster config:
	{Name:default-k8s-diff-port-312944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:04.887849  687309 out.go:179] * Starting "default-k8s-diff-port-312944" primary control-plane node in "default-k8s-diff-port-312944" cluster
	I1207 23:37:04.889015  687309 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:37:04.890364  687309 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:37:04.891508  687309 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:04.891547  687309 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 23:37:04.891558  687309 cache.go:65] Caching tarball of preloaded images
	I1207 23:37:04.891619  687309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:37:04.891648  687309 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:37:04.891657  687309 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:37:04.891747  687309 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/config.json ...
	I1207 23:37:04.914740  687309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:37:04.914773  687309 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:37:04.914795  687309 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:37:04.914831  687309 start.go:360] acquireMachinesLock for default-k8s-diff-port-312944: {Name:mk446704c0609871a6f2b287c350f0600ce374c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:37:04.914903  687309 start.go:364] duration metric: took 44.996µs to acquireMachinesLock for "default-k8s-diff-port-312944"
	I1207 23:37:04.914924  687309 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:37:04.914931  687309 fix.go:54] fixHost starting: 
	I1207 23:37:04.915230  687309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:37:04.933868  687309 fix.go:112] recreateIfNeeded on default-k8s-diff-port-312944: state=Stopped err=<nil>
	W1207 23:37:04.933902  687309 fix.go:138] unexpected machine state, will restart: <nil>
	W1207 23:37:00.738011  673565 node_ready.go:57] node "auto-600852" has "Ready":"False" status (will retry)
	I1207 23:37:02.736950  673565 node_ready.go:49] node "auto-600852" is "Ready"
	I1207 23:37:02.736980  673565 node_ready.go:38] duration metric: took 11.002778413s for node "auto-600852" to be "Ready" ...
	I1207 23:37:02.736997  673565 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:37:02.737066  673565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:37:02.751037  673565 api_server.go:72] duration metric: took 11.345617446s to wait for apiserver process to appear ...
	I1207 23:37:02.751079  673565 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:37:02.751106  673565 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1207 23:37:02.755278  673565 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1207 23:37:02.756387  673565 api_server.go:141] control plane version: v1.34.2
	I1207 23:37:02.756412  673565 api_server.go:131] duration metric: took 5.325955ms to wait for apiserver health ...
	I1207 23:37:02.756420  673565 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:37:02.759924  673565 system_pods.go:59] 8 kube-system pods found
	I1207 23:37:02.759969  673565 system_pods.go:61] "coredns-66bc5c9577-cvkqs" [21e932cc-f500-4e42-a043-59494f1ef96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:02.759978  673565 system_pods.go:61] "etcd-auto-600852" [dfb2cc27-d003-4c95-93c5-ee04651fbc56] Running
	I1207 23:37:02.759997  673565 system_pods.go:61] "kindnet-htd2n" [f0285656-53e9-4405-a905-6c8de6034470] Running
	I1207 23:37:02.760002  673565 system_pods.go:61] "kube-apiserver-auto-600852" [54fd7cf0-fe8c-44ce-bdc9-ea4d438cd061] Running
	I1207 23:37:02.760008  673565 system_pods.go:61] "kube-controller-manager-auto-600852" [45539d0e-185f-4c78-b238-0f776feb4bbb] Running
	I1207 23:37:02.760015  673565 system_pods.go:61] "kube-proxy-smqcr" [81c29963-801c-47a8-ba98-733d78c3b341] Running
	I1207 23:37:02.760020  673565 system_pods.go:61] "kube-scheduler-auto-600852" [f1899c61-58d6-4f1e-8568-a0c69337ce73] Running
	I1207 23:37:02.760030  673565 system_pods.go:61] "storage-provisioner" [eeed8067-2ea0-4f0b-b48f-bbfd0fed14a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:02.760038  673565 system_pods.go:74] duration metric: took 3.611353ms to wait for pod list to return data ...
	I1207 23:37:02.760049  673565 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:37:02.762530  673565 default_sa.go:45] found service account: "default"
	I1207 23:37:02.762563  673565 default_sa.go:55] duration metric: took 2.49853ms for default service account to be created ...
	I1207 23:37:02.762574  673565 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:37:02.765308  673565 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:02.765366  673565 system_pods.go:89] "coredns-66bc5c9577-cvkqs" [21e932cc-f500-4e42-a043-59494f1ef96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:02.765375  673565 system_pods.go:89] "etcd-auto-600852" [dfb2cc27-d003-4c95-93c5-ee04651fbc56] Running
	I1207 23:37:02.765383  673565 system_pods.go:89] "kindnet-htd2n" [f0285656-53e9-4405-a905-6c8de6034470] Running
	I1207 23:37:02.765388  673565 system_pods.go:89] "kube-apiserver-auto-600852" [54fd7cf0-fe8c-44ce-bdc9-ea4d438cd061] Running
	I1207 23:37:02.765394  673565 system_pods.go:89] "kube-controller-manager-auto-600852" [45539d0e-185f-4c78-b238-0f776feb4bbb] Running
	I1207 23:37:02.765404  673565 system_pods.go:89] "kube-proxy-smqcr" [81c29963-801c-47a8-ba98-733d78c3b341] Running
	I1207 23:37:02.765409  673565 system_pods.go:89] "kube-scheduler-auto-600852" [f1899c61-58d6-4f1e-8568-a0c69337ce73] Running
	I1207 23:37:02.765419  673565 system_pods.go:89] "storage-provisioner" [eeed8067-2ea0-4f0b-b48f-bbfd0fed14a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:02.765444  673565 retry.go:31] will retry after 204.835553ms: missing components: kube-dns
	I1207 23:37:02.976001  673565 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:02.976063  673565 system_pods.go:89] "coredns-66bc5c9577-cvkqs" [21e932cc-f500-4e42-a043-59494f1ef96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:02.976071  673565 system_pods.go:89] "etcd-auto-600852" [dfb2cc27-d003-4c95-93c5-ee04651fbc56] Running
	I1207 23:37:02.976086  673565 system_pods.go:89] "kindnet-htd2n" [f0285656-53e9-4405-a905-6c8de6034470] Running
	I1207 23:37:02.976092  673565 system_pods.go:89] "kube-apiserver-auto-600852" [54fd7cf0-fe8c-44ce-bdc9-ea4d438cd061] Running
	I1207 23:37:02.976105  673565 system_pods.go:89] "kube-controller-manager-auto-600852" [45539d0e-185f-4c78-b238-0f776feb4bbb] Running
	I1207 23:37:02.976111  673565 system_pods.go:89] "kube-proxy-smqcr" [81c29963-801c-47a8-ba98-733d78c3b341] Running
	I1207 23:37:02.976120  673565 system_pods.go:89] "kube-scheduler-auto-600852" [f1899c61-58d6-4f1e-8568-a0c69337ce73] Running
	I1207 23:37:02.976128  673565 system_pods.go:89] "storage-provisioner" [eeed8067-2ea0-4f0b-b48f-bbfd0fed14a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:02.976152  673565 retry.go:31] will retry after 367.780953ms: missing components: kube-dns
	I1207 23:37:03.347925  673565 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:03.347975  673565 system_pods.go:89] "coredns-66bc5c9577-cvkqs" [21e932cc-f500-4e42-a043-59494f1ef96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:03.347984  673565 system_pods.go:89] "etcd-auto-600852" [dfb2cc27-d003-4c95-93c5-ee04651fbc56] Running
	I1207 23:37:03.347991  673565 system_pods.go:89] "kindnet-htd2n" [f0285656-53e9-4405-a905-6c8de6034470] Running
	I1207 23:37:03.347996  673565 system_pods.go:89] "kube-apiserver-auto-600852" [54fd7cf0-fe8c-44ce-bdc9-ea4d438cd061] Running
	I1207 23:37:03.348002  673565 system_pods.go:89] "kube-controller-manager-auto-600852" [45539d0e-185f-4c78-b238-0f776feb4bbb] Running
	I1207 23:37:03.348014  673565 system_pods.go:89] "kube-proxy-smqcr" [81c29963-801c-47a8-ba98-733d78c3b341] Running
	I1207 23:37:03.348019  673565 system_pods.go:89] "kube-scheduler-auto-600852" [f1899c61-58d6-4f1e-8568-a0c69337ce73] Running
	I1207 23:37:03.348030  673565 system_pods.go:89] "storage-provisioner" [eeed8067-2ea0-4f0b-b48f-bbfd0fed14a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:03.348055  673565 retry.go:31] will retry after 327.949085ms: missing components: kube-dns
	I1207 23:37:03.680051  673565 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:03.680084  673565 system_pods.go:89] "coredns-66bc5c9577-cvkqs" [21e932cc-f500-4e42-a043-59494f1ef96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:03.680091  673565 system_pods.go:89] "etcd-auto-600852" [dfb2cc27-d003-4c95-93c5-ee04651fbc56] Running
	I1207 23:37:03.680097  673565 system_pods.go:89] "kindnet-htd2n" [f0285656-53e9-4405-a905-6c8de6034470] Running
	I1207 23:37:03.680100  673565 system_pods.go:89] "kube-apiserver-auto-600852" [54fd7cf0-fe8c-44ce-bdc9-ea4d438cd061] Running
	I1207 23:37:03.680104  673565 system_pods.go:89] "kube-controller-manager-auto-600852" [45539d0e-185f-4c78-b238-0f776feb4bbb] Running
	I1207 23:37:03.680107  673565 system_pods.go:89] "kube-proxy-smqcr" [81c29963-801c-47a8-ba98-733d78c3b341] Running
	I1207 23:37:03.680110  673565 system_pods.go:89] "kube-scheduler-auto-600852" [f1899c61-58d6-4f1e-8568-a0c69337ce73] Running
	I1207 23:37:03.680117  673565 system_pods.go:89] "storage-provisioner" [eeed8067-2ea0-4f0b-b48f-bbfd0fed14a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:03.680134  673565 retry.go:31] will retry after 591.359397ms: missing components: kube-dns
	I1207 23:37:04.276928  673565 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:04.276964  673565 system_pods.go:89] "coredns-66bc5c9577-cvkqs" [21e932cc-f500-4e42-a043-59494f1ef96c] Running
	I1207 23:37:04.276974  673565 system_pods.go:89] "etcd-auto-600852" [dfb2cc27-d003-4c95-93c5-ee04651fbc56] Running
	I1207 23:37:04.276980  673565 system_pods.go:89] "kindnet-htd2n" [f0285656-53e9-4405-a905-6c8de6034470] Running
	I1207 23:37:04.276985  673565 system_pods.go:89] "kube-apiserver-auto-600852" [54fd7cf0-fe8c-44ce-bdc9-ea4d438cd061] Running
	I1207 23:37:04.276992  673565 system_pods.go:89] "kube-controller-manager-auto-600852" [45539d0e-185f-4c78-b238-0f776feb4bbb] Running
	I1207 23:37:04.277000  673565 system_pods.go:89] "kube-proxy-smqcr" [81c29963-801c-47a8-ba98-733d78c3b341] Running
	I1207 23:37:04.277005  673565 system_pods.go:89] "kube-scheduler-auto-600852" [f1899c61-58d6-4f1e-8568-a0c69337ce73] Running
	I1207 23:37:04.277010  673565 system_pods.go:89] "storage-provisioner" [eeed8067-2ea0-4f0b-b48f-bbfd0fed14a7] Running
	I1207 23:37:04.277020  673565 system_pods.go:126] duration metric: took 1.514439586s to wait for k8s-apps to be running ...
	I1207 23:37:04.277034  673565 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:37:04.277088  673565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:37:04.291684  673565 system_svc.go:56] duration metric: took 14.64018ms WaitForService to wait for kubelet
	I1207 23:37:04.291718  673565 kubeadm.go:587] duration metric: took 12.886306227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:37:04.291743  673565 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:37:04.294886  673565 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:37:04.294923  673565 node_conditions.go:123] node cpu capacity is 8
	I1207 23:37:04.294946  673565 node_conditions.go:105] duration metric: took 3.196073ms to run NodePressure ...
	I1207 23:37:04.294964  673565 start.go:242] waiting for startup goroutines ...
	I1207 23:37:04.294980  673565 start.go:247] waiting for cluster config update ...
	I1207 23:37:04.295000  673565 start.go:256] writing updated cluster config ...
	I1207 23:37:04.295467  673565 ssh_runner.go:195] Run: rm -f paused
	I1207 23:37:04.299846  673565 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:37:04.304102  673565 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cvkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.309006  673565 pod_ready.go:94] pod "coredns-66bc5c9577-cvkqs" is "Ready"
	I1207 23:37:04.309034  673565 pod_ready.go:86] duration metric: took 4.900761ms for pod "coredns-66bc5c9577-cvkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.311296  673565 pod_ready.go:83] waiting for pod "etcd-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.315648  673565 pod_ready.go:94] pod "etcd-auto-600852" is "Ready"
	I1207 23:37:04.315672  673565 pod_ready.go:86] duration metric: took 4.352934ms for pod "etcd-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.317910  673565 pod_ready.go:83] waiting for pod "kube-apiserver-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.322193  673565 pod_ready.go:94] pod "kube-apiserver-auto-600852" is "Ready"
	I1207 23:37:04.322221  673565 pod_ready.go:86] duration metric: took 4.280832ms for pod "kube-apiserver-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.324442  673565 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.704268  673565 pod_ready.go:94] pod "kube-controller-manager-auto-600852" is "Ready"
	I1207 23:37:04.704299  673565 pod_ready.go:86] duration metric: took 379.836962ms for pod "kube-controller-manager-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.904744  673565 pod_ready.go:83] waiting for pod "kube-proxy-smqcr" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:05.304822  673565 pod_ready.go:94] pod "kube-proxy-smqcr" is "Ready"
	I1207 23:37:05.304848  673565 pod_ready.go:86] duration metric: took 400.076972ms for pod "kube-proxy-smqcr" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:05.505043  673565 pod_ready.go:83] waiting for pod "kube-scheduler-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:05.904580  673565 pod_ready.go:94] pod "kube-scheduler-auto-600852" is "Ready"
	I1207 23:37:05.904608  673565 pod_ready.go:86] duration metric: took 399.534639ms for pod "kube-scheduler-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:05.904620  673565 pod_ready.go:40] duration metric: took 1.604737225s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:37:05.952021  673565 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 23:37:05.954352  673565 out.go:179] * Done! kubectl is now configured to use "auto-600852" cluster and "default" namespace by default
	W1207 23:37:05.068234  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	W1207 23:37:07.559644  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	I1207 23:37:04.935946  687309 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-312944" ...
	I1207 23:37:04.936017  687309 cli_runner.go:164] Run: docker start default-k8s-diff-port-312944
	I1207 23:37:05.208710  687309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:37:05.229083  687309 kic.go:430] container "default-k8s-diff-port-312944" state is running.
	I1207 23:37:05.229522  687309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:37:05.249428  687309 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/config.json ...
	I1207 23:37:05.249661  687309 machine.go:94] provisionDockerMachine start ...
	I1207 23:37:05.249725  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:05.269354  687309 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:05.269708  687309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1207 23:37:05.269727  687309 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:37:05.270396  687309 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37942->127.0.0.1:33483: read: connection reset by peer
	I1207 23:37:08.422504  687309 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-312944
	
	I1207 23:37:08.422536  687309 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-312944"
	I1207 23:37:08.422599  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:08.448957  687309 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:08.449395  687309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1207 23:37:08.449418  687309 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-312944 && echo "default-k8s-diff-port-312944" | sudo tee /etc/hostname
	I1207 23:37:08.605519  687309 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-312944
	
	I1207 23:37:08.605821  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:08.629868  687309 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:08.630212  687309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1207 23:37:08.630245  687309 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-312944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-312944/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-312944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:37:08.776693  687309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:37:08.776724  687309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:37:08.776748  687309 ubuntu.go:190] setting up certificates
	I1207 23:37:08.776760  687309 provision.go:84] configureAuth start
	I1207 23:37:08.776845  687309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:37:08.802259  687309 provision.go:143] copyHostCerts
	I1207 23:37:08.802351  687309 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:37:08.802363  687309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:37:08.802460  687309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:37:08.802621  687309 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:37:08.802637  687309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:37:08.802684  687309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:37:08.802819  687309 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:37:08.802832  687309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:37:08.803451  687309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:37:08.803609  687309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-312944 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-312944 localhost minikube]
	I1207 23:37:08.924820  687309 provision.go:177] copyRemoteCerts
	I1207 23:37:08.924880  687309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:37:08.924914  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:08.943947  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:09.051100  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1207 23:37:09.084406  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:37:09.104116  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:37:09.122448  687309 provision.go:87] duration metric: took 345.672125ms to configureAuth
	I1207 23:37:09.122485  687309 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:37:09.122723  687309 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:09.122898  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:09.152527  687309 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:09.152839  687309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1207 23:37:09.152875  687309 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:37:09.783862  687309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:37:09.783893  687309 machine.go:97] duration metric: took 4.534215722s to provisionDockerMachine
	I1207 23:37:09.783906  687309 start.go:293] postStartSetup for "default-k8s-diff-port-312944" (driver="docker")
	I1207 23:37:09.783922  687309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:37:09.784000  687309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:37:09.784050  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:09.804027  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:09.899254  687309 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:37:09.903022  687309 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:37:09.903046  687309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:37:09.903058  687309 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:37:09.903108  687309 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:37:09.903182  687309 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:37:09.903269  687309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:37:09.911506  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:37:09.931221  687309 start.go:296] duration metric: took 147.295974ms for postStartSetup
	I1207 23:37:09.931387  687309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:37:09.931476  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:09.950851  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:10.042778  687309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:37:10.047507  687309 fix.go:56] duration metric: took 5.132570353s for fixHost
	I1207 23:37:10.047531  687309 start.go:83] releasing machines lock for "default-k8s-diff-port-312944", held for 5.132616614s
	I1207 23:37:10.047599  687309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:37:10.066677  687309 ssh_runner.go:195] Run: cat /version.json
	I1207 23:37:10.066749  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:10.066759  687309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:37:10.066839  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:10.086685  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:10.087600  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:10.253926  687309 ssh_runner.go:195] Run: systemctl --version
	I1207 23:37:10.261628  687309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:37:10.303664  687309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:37:10.309275  687309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:37:10.309439  687309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:37:10.319421  687309 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:37:10.319452  687309 start.go:496] detecting cgroup driver to use...
	I1207 23:37:10.319490  687309 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:37:10.319538  687309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:37:10.337147  687309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:37:10.354063  687309 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:37:10.354131  687309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:37:10.371992  687309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:37:10.389168  687309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:37:10.495834  687309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:37:10.599210  687309 docker.go:234] disabling docker service ...
	I1207 23:37:10.599293  687309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:37:10.617012  687309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:37:10.632804  687309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:37:10.734587  687309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:37:10.824989  687309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:37:10.840644  687309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:37:10.856737  687309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:37:10.856811  687309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.866390  687309 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:37:10.866468  687309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.875807  687309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.885215  687309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.895379  687309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:37:10.904010  687309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.914008  687309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.923534  687309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.932953  687309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:37:10.940895  687309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
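The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is reloaded and restarted. A minimal Go sketch of the same line-substitution idea, assuming a local writable copy of the file rather than the SSH session used in the log:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Local copy of the file the log edits over SSH (assumption for this sketch).
	const conf = "02-crio.conf"

	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}

	// Same substitutions the sed commands perform above.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))

	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
}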
	I1207 23:37:10.948481  687309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:11.033722  687309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:37:11.170187  687309 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:37:11.170272  687309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:37:11.174951  687309 start.go:564] Will wait 60s for crictl version
	I1207 23:37:11.175003  687309 ssh_runner.go:195] Run: which crictl
	I1207 23:37:11.179346  687309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:37:11.211932  687309 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
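Before the crictl version query, start.go waits up to 60s for /var/run/crio/crio.sock to appear. That wait is essentially a stat-and-retry loop; a small stdlib sketch (socket path and timeout taken from the log, helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}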
	I1207 23:37:11.212027  687309 ssh_runner.go:195] Run: crio --version
	I1207 23:37:11.243710  687309 ssh_runner.go:195] Run: crio --version
	I1207 23:37:11.274750  687309 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:37:11.276028  687309 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-312944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:37:11.295763  687309 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1207 23:37:11.300888  687309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
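The bash pipeline above drops any stale host.minikube.internal line from /etc/hosts and appends the fresh mapping. The same edit expressed in Go (IP and hostname from the log; writing /etc/hosts requires root):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const (
		hostsFile = "/etc/hosts"
		ip        = "192.168.94.1"
		name      = "host.minikube.internal"
	)

	data, err := os.ReadFile(hostsFile)
	if err != nil {
		log.Fatal(err)
	}

	// Drop any existing line for the name, then append the new mapping,
	// which is what the grep -v / echo pipeline in the log does.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)

	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}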
	I1207 23:37:11.313373  687309 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-312944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:37:11.313543  687309 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:11.313601  687309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:37:11.348665  687309 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:37:11.348694  687309 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:37:11.348753  687309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:37:11.374438  687309 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:37:11.374462  687309 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:37:11.374470  687309 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.2 crio true true} ...
	I1207 23:37:11.374587  687309 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-312944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
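The kubelet unit shown above is rendered from the node name, node IP and Kubernetes version before being copied into the kubelet drop-in directory. A text/template sketch of that rendering, with the template text reconstructed from the log output and the parameter struct chosen for this example:

package main

import (
	"log"
	"os"
	"text/template"
)

// Values that vary per node in the unit printed above.
type kubeletParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	p := kubeletParams{
		KubernetesVersion: "v1.34.2",
		NodeName:          "default-k8s-diff-port-312944",
		NodeIP:            "192.168.94.2",
	}
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}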
	I1207 23:37:11.374665  687309 ssh_runner.go:195] Run: crio config
	I1207 23:37:11.422172  687309 cni.go:84] Creating CNI manager for ""
	I1207 23:37:11.422195  687309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:37:11.422219  687309 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:37:11.422239  687309 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-312944 NodeName:default-k8s-diff-port-312944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:37:11.422411  687309 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-312944"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:37:11.422493  687309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:37:11.431321  687309 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:37:11.431425  687309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:37:11.439544  687309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1207 23:37:11.452861  687309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:37:11.466957  687309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1207 23:37:11.480742  687309 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:37:11.485173  687309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:37:11.495563  687309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:11.581098  687309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:37:11.606983  687309 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944 for IP: 192.168.94.2
	I1207 23:37:11.607006  687309 certs.go:195] generating shared ca certs ...
	I1207 23:37:11.607065  687309 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:11.607229  687309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:37:11.607291  687309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:37:11.607307  687309 certs.go:257] generating profile certs ...
	I1207 23:37:11.607441  687309 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.key
	I1207 23:37:11.607528  687309 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key.025605fa
	I1207 23:37:11.607598  687309 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.key
	I1207 23:37:11.607714  687309 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:37:11.607747  687309 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:37:11.607757  687309 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:37:11.607787  687309 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:37:11.607811  687309 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:37:11.607833  687309 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:37:11.607902  687309 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:37:11.608582  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:37:11.629973  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:37:11.650005  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:37:11.671965  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:37:11.702669  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1207 23:37:11.724166  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:37:11.750220  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:37:11.769521  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:37:11.787613  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:37:11.805628  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:37:11.826834  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:37:11.845813  687309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:37:11.861618  687309 ssh_runner.go:195] Run: openssl version
	I1207 23:37:11.868699  687309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:11.877649  687309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:37:11.886218  687309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:11.890549  687309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:11.890608  687309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:11.938894  687309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:37:11.950819  687309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:37:11.962180  687309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:37:11.972501  687309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:37:11.976373  687309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:37:11.976428  687309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:37:12.012377  687309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:37:12.021807  687309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:37:12.031638  687309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:37:12.041611  687309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:37:12.046159  687309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:37:12.046230  687309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:37:12.101484  687309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:37:12.111349  687309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:37:12.117036  687309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:37:12.164285  687309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:37:12.217271  687309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:37:12.271890  687309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:37:12.317028  687309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:37:12.354360  687309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
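Each openssl x509 -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now. The equivalent check with crypto/x509 (path shown for one of the certs from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	const certPath = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

	data, err := os.ReadFile(certPath)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
	// expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}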
	I1207 23:37:12.393769  687309 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-312944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:12.393881  687309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:37:12.393943  687309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:37:12.428893  687309 cri.go:89] found id: "362b83f015210f03925637b1b0598b825d674607d060c054cf459ff6794854a5"
	I1207 23:37:12.428918  687309 cri.go:89] found id: "fa639c7294ee1af933ce6c68db15470c1c2d5d2c404c5e0568eaac61e7ede373"
	I1207 23:37:12.428924  687309 cri.go:89] found id: "b04410a9187c7167576fa7f9cb5bf5a761981c61b37ea3b68eb353c721baab8f"
	I1207 23:37:12.428935  687309 cri.go:89] found id: "f27c08f4d2ee8d8898a367bb16db44c1f22130d15e95d71881aa776e8567269c"
	I1207 23:37:12.428939  687309 cri.go:89] found id: ""
	I1207 23:37:12.428990  687309 ssh_runner.go:195] Run: sudo runc list -f json
	W1207 23:37:12.441736  687309 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:37:12Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:37:12.441834  687309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:37:12.450163  687309 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:37:12.450191  687309 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:37:12.450250  687309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:37:12.459014  687309 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:37:12.460119  687309 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-312944" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:37:12.460912  687309 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-389542/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-312944" cluster setting kubeconfig missing "default-k8s-diff-port-312944" context setting]
	I1207 23:37:12.461997  687309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:12.464184  687309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:37:12.473882  687309 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1207 23:37:12.473916  687309 kubeadm.go:602] duration metric: took 23.717856ms to restartPrimaryControlPlane
	I1207 23:37:12.473927  687309 kubeadm.go:403] duration metric: took 80.176844ms to StartCluster
	I1207 23:37:12.473946  687309 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:12.474025  687309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:37:12.475543  687309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:12.475799  687309 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:37:12.475875  687309 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:37:12.475986  687309 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-312944"
	I1207 23:37:12.476013  687309 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-312944"
	W1207 23:37:12.476025  687309 addons.go:248] addon storage-provisioner should already be in state true
	I1207 23:37:12.476033  687309 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:12.476036  687309 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-312944"
	I1207 23:37:12.476054  687309 host.go:66] Checking if "default-k8s-diff-port-312944" exists ...
	I1207 23:37:12.476060  687309 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-312944"
	W1207 23:37:12.476072  687309 addons.go:248] addon dashboard should already be in state true
	I1207 23:37:12.476109  687309 host.go:66] Checking if "default-k8s-diff-port-312944" exists ...
	I1207 23:37:12.476036  687309 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-312944"
	I1207 23:37:12.476163  687309 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-312944"
	I1207 23:37:12.476455  687309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:37:12.476584  687309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:37:12.476605  687309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:37:12.478079  687309 out.go:179] * Verifying Kubernetes components...
	I1207 23:37:12.479378  687309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:12.505087  687309 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:37:12.506133  687309 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1207 23:37:12.506162  687309 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:37:12.506423  687309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:37:12.506502  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:12.508618  687309 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1207 23:37:12.682029  684670 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 23:37:12.682124  684670 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:37:12.682251  684670 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:37:12.682398  684670 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:37:12.682468  684670 kubeadm.go:319] OS: Linux
	I1207 23:37:12.682540  684670 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:37:12.682599  684670 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:37:12.682666  684670 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:37:12.682724  684670 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:37:12.682792  684670 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:37:12.682865  684670 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:37:12.682936  684670 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:37:12.683014  684670 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:37:12.683127  684670 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:37:12.683256  684670 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:37:12.683423  684670 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:37:12.683543  684670 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 23:37:12.685463  684670 out.go:252]   - Generating certificates and keys ...
	I1207 23:37:12.685567  684670 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:37:12.685660  684670 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:37:12.685748  684670 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:37:12.685829  684670 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 23:37:12.685908  684670 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 23:37:12.685975  684670 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 23:37:12.686046  684670 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 23:37:12.686198  684670 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-600852 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1207 23:37:12.686272  684670 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 23:37:12.686446  684670 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-600852 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1207 23:37:12.686530  684670 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 23:37:12.686614  684670 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 23:37:12.686673  684670 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 23:37:12.686741  684670 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:37:12.687027  684670 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:37:12.687117  684670 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:37:12.687184  684670 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:37:12.687261  684670 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:37:12.687433  684670 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:37:12.687586  684670 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:37:12.687693  684670 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 23:37:12.689051  684670 out.go:252]   - Booting up control plane ...
	I1207 23:37:12.689204  684670 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:37:12.689307  684670 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:37:12.689668  684670 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:37:12.690053  684670 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:37:12.690319  684670 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:37:12.690569  684670 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:37:12.690695  684670 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:37:12.690775  684670 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:37:12.691018  684670 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 23:37:12.691172  684670 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 23:37:12.691248  684670 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.382697ms
	I1207 23:37:12.691379  684670 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 23:37:12.691479  684670 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1207 23:37:12.691595  684670 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 23:37:12.691690  684670 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 23:37:12.691789  684670 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.056198801s
	I1207 23:37:12.691873  684670 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.432500697s
	I1207 23:37:12.691967  684670 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502161176s
	I1207 23:37:12.692101  684670 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 23:37:12.692255  684670 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 23:37:12.692335  684670 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 23:37:12.692587  684670 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-600852 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 23:37:12.692669  684670 kubeadm.go:319] [bootstrap-token] Using token: kh1i16.e5yldh6cwcmarzt4
	I1207 23:37:12.695079  684670 out.go:252]   - Configuring RBAC rules ...
	I1207 23:37:12.695222  684670 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 23:37:12.695352  684670 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 23:37:12.695537  684670 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 23:37:12.695698  684670 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 23:37:12.695841  684670 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 23:37:12.695947  684670 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 23:37:12.696107  684670 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 23:37:12.696169  684670 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 23:37:12.696231  684670 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 23:37:12.696250  684670 kubeadm.go:319] 
	I1207 23:37:12.696319  684670 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 23:37:12.696339  684670 kubeadm.go:319] 
	I1207 23:37:12.696431  684670 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 23:37:12.696442  684670 kubeadm.go:319] 
	I1207 23:37:12.696474  684670 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 23:37:12.696556  684670 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 23:37:12.696621  684670 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 23:37:12.696631  684670 kubeadm.go:319] 
	I1207 23:37:12.696693  684670 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 23:37:12.696702  684670 kubeadm.go:319] 
	I1207 23:37:12.696760  684670 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 23:37:12.696770  684670 kubeadm.go:319] 
	I1207 23:37:12.696833  684670 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 23:37:12.696926  684670 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 23:37:12.697022  684670 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 23:37:12.697031  684670 kubeadm.go:319] 
	I1207 23:37:12.697125  684670 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 23:37:12.697236  684670 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 23:37:12.697248  684670 kubeadm.go:319] 
	I1207 23:37:12.697407  684670 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kh1i16.e5yldh6cwcmarzt4 \
	I1207 23:37:12.697570  684670 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 \
	I1207 23:37:12.697597  684670 kubeadm.go:319] 	--control-plane 
	I1207 23:37:12.697602  684670 kubeadm.go:319] 
	I1207 23:37:12.697708  684670 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 23:37:12.697714  684670 kubeadm.go:319] 
	I1207 23:37:12.697823  684670 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kh1i16.e5yldh6cwcmarzt4 \
	I1207 23:37:12.697972  684670 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 
	I1207 23:37:12.697988  684670 cni.go:84] Creating CNI manager for "kindnet"
	I1207 23:37:12.699540  684670 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1207 23:37:10.057107  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	I1207 23:37:12.057979  673247 pod_ready.go:94] pod "coredns-66bc5c9577-wvgqf" is "Ready"
	I1207 23:37:12.058008  673247 pod_ready.go:86] duration metric: took 41.006545623s for pod "coredns-66bc5c9577-wvgqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.065711  673247 pod_ready.go:83] waiting for pod "etcd-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.077874  673247 pod_ready.go:94] pod "etcd-embed-certs-654118" is "Ready"
	I1207 23:37:12.077910  673247 pod_ready.go:86] duration metric: took 12.105816ms for pod "etcd-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.080983  673247 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.086764  673247 pod_ready.go:94] pod "kube-apiserver-embed-certs-654118" is "Ready"
	I1207 23:37:12.086795  673247 pod_ready.go:86] duration metric: took 5.779168ms for pod "kube-apiserver-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.089056  673247 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.255583  673247 pod_ready.go:94] pod "kube-controller-manager-embed-certs-654118" is "Ready"
	I1207 23:37:12.255617  673247 pod_ready.go:86] duration metric: took 166.534029ms for pod "kube-controller-manager-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.456117  673247 pod_ready.go:83] waiting for pod "kube-proxy-l75b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.855734  673247 pod_ready.go:94] pod "kube-proxy-l75b2" is "Ready"
	I1207 23:37:12.855768  673247 pod_ready.go:86] duration metric: took 399.618817ms for pod "kube-proxy-l75b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:13.055683  673247 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:13.455124  673247 pod_ready.go:94] pod "kube-scheduler-embed-certs-654118" is "Ready"
	I1207 23:37:13.455158  673247 pod_ready.go:86] duration metric: took 399.446873ms for pod "kube-scheduler-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:13.455174  673247 pod_ready.go:40] duration metric: took 42.409128438s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:37:13.511191  673247 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 23:37:13.515463  673247 out.go:179] * Done! kubectl is now configured to use "embed-certs-654118" cluster and "default" namespace by default
	I1207 23:37:12.510784  687309 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-312944"
	W1207 23:37:12.510809  687309 addons.go:248] addon default-storageclass should already be in state true
	I1207 23:37:12.510842  687309 host.go:66] Checking if "default-k8s-diff-port-312944" exists ...
	I1207 23:37:12.511320  687309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:37:12.524472  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1207 23:37:12.524510  687309 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1207 23:37:12.524594  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:12.544266  687309 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:37:12.544296  687309 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:37:12.544395  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:12.549413  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:12.556644  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:12.571882  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:12.637225  687309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:37:12.651683  687309 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-312944" to be "Ready" ...
	I1207 23:37:12.666169  687309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:37:12.668149  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1207 23:37:12.668178  687309 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1207 23:37:12.684512  687309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:37:12.689736  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1207 23:37:12.689808  687309 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1207 23:37:12.711585  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1207 23:37:12.711639  687309 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1207 23:37:12.738126  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1207 23:37:12.738153  687309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1207 23:37:12.756490  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1207 23:37:12.756517  687309 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1207 23:37:12.775895  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1207 23:37:12.775923  687309 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1207 23:37:12.794028  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1207 23:37:12.794094  687309 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1207 23:37:12.810198  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1207 23:37:12.810228  687309 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1207 23:37:12.830550  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:37:12.830580  687309 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1207 23:37:12.845228  687309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:37:14.326249  687309 node_ready.go:49] node "default-k8s-diff-port-312944" is "Ready"
	I1207 23:37:14.326294  687309 node_ready.go:38] duration metric: took 1.674580102s for node "default-k8s-diff-port-312944" to be "Ready" ...
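node_ready.go above polls the node object until its Ready condition is True, with a 6m budget. From the command line the same wait can be expressed with kubectl wait, wrapped here in a small Go helper (node name and timeout taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Block until the node reports condition Ready=True, or the timeout hits,
	// mirroring the 6m node wait in the log.
	cmd := exec.Command("kubectl", "wait", "--for=condition=Ready",
		"node/default-k8s-diff-port-312944", "--timeout=6m")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "node did not become Ready:", err)
		os.Exit(1)
	}
}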
	I1207 23:37:14.326312  687309 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:37:14.326451  687309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:37:14.982123  687309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.315915331s)
	I1207 23:37:14.982200  687309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.297655548s)
	I1207 23:37:14.982463  687309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.137172371s)
	I1207 23:37:14.982514  687309 api_server.go:72] duration metric: took 2.506683292s to wait for apiserver process to appear ...
	I1207 23:37:14.982530  687309 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:37:14.982554  687309 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1207 23:37:14.985404  687309 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-312944 addons enable metrics-server
	
	I1207 23:37:14.988142  687309 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:37:14.988171  687309 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
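The healthz probe above hits https://192.168.94.2:8444/healthz and keeps retrying while post-start hooks such as rbac/bootstrap-roles are still failing. A stripped-down sketch of such a poll (endpoint from the log; the retry interval and the relaxed TLS verification are assumptions of this sketch):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// The probe targets a bare IP, so certificate verification is skipped
		// here (assumption for this sketch, not necessarily what minikube does).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}

	const url = "https://192.168.94.2:8444/healthz"
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for a healthy apiserver")
}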
	I1207 23:37:14.992276  687309 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1207 23:37:12.701068  684670 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 23:37:12.707503  684670 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1207 23:37:12.707530  684670 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1207 23:37:12.732102  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 23:37:13.011122  684670 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 23:37:13.011195  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:13.011199  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-600852 minikube.k8s.io/updated_at=2025_12_07T23_37_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=kindnet-600852 minikube.k8s.io/primary=true
	I1207 23:37:13.023647  684670 ops.go:34] apiserver oom_adj: -16
	I1207 23:37:13.094742  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:13.595442  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:14.095554  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:14.595229  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:15.095535  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:15.595011  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:16.095093  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:16.594870  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:17.094842  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:17.166708  684670 kubeadm.go:1114] duration metric: took 4.155583217s to wait for elevateKubeSystemPrivileges
	I1207 23:37:17.166756  684670 kubeadm.go:403] duration metric: took 16.092913541s to StartCluster
	I1207 23:37:17.166778  684670 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:17.166846  684670 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:37:17.168859  684670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:17.169139  684670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 23:37:17.169148  684670 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:37:17.169221  684670 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:37:17.169336  684670 addons.go:70] Setting storage-provisioner=true in profile "kindnet-600852"
	I1207 23:37:17.169359  684670 addons.go:239] Setting addon storage-provisioner=true in "kindnet-600852"
	I1207 23:37:17.169379  684670 addons.go:70] Setting default-storageclass=true in profile "kindnet-600852"
	I1207 23:37:17.169399  684670 config.go:182] Loaded profile config "kindnet-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:17.169411  684670 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-600852"
	I1207 23:37:17.169395  684670 host.go:66] Checking if "kindnet-600852" exists ...
	I1207 23:37:17.169855  684670 cli_runner.go:164] Run: docker container inspect kindnet-600852 --format={{.State.Status}}
	I1207 23:37:17.170020  684670 cli_runner.go:164] Run: docker container inspect kindnet-600852 --format={{.State.Status}}
	I1207 23:37:17.170716  684670 out.go:179] * Verifying Kubernetes components...
	I1207 23:37:17.172437  684670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:17.194276  684670 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:37:17.195154  684670 addons.go:239] Setting addon default-storageclass=true in "kindnet-600852"
	I1207 23:37:17.195205  684670 host.go:66] Checking if "kindnet-600852" exists ...
	I1207 23:37:17.195707  684670 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:37:17.195729  684670 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:37:17.195779  684670 cli_runner.go:164] Run: docker container inspect kindnet-600852 --format={{.State.Status}}
	I1207 23:37:17.195790  684670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-600852
	I1207 23:37:17.227629  684670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/kindnet-600852/id_rsa Username:docker}
	I1207 23:37:17.230378  684670 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:37:17.230611  684670 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:37:17.230707  684670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-600852
	I1207 23:37:17.268458  684670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/kindnet-600852/id_rsa Username:docker}
	I1207 23:37:17.282470  684670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 23:37:17.330457  684670 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:37:17.348754  684670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:37:17.382906  684670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:37:17.451617  684670 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1207 23:37:17.453385  684670 node_ready.go:35] waiting up to 15m0s for node "kindnet-600852" to be "Ready" ...
	I1207 23:37:17.668581  684670 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1207 23:37:14.995259  687309 addons.go:530] duration metric: took 2.51938942s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1207 23:37:15.483502  687309 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1207 23:37:15.488380  687309 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:37:15.488409  687309 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:37:15.982675  687309 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1207 23:37:15.989607  687309 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1207 23:37:15.990620  687309 api_server.go:141] control plane version: v1.34.2
	I1207 23:37:15.990646  687309 api_server.go:131] duration metric: took 1.008108817s to wait for apiserver health ...
	I1207 23:37:15.990655  687309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:37:15.994192  687309 system_pods.go:59] 8 kube-system pods found
	I1207 23:37:15.994244  687309 system_pods.go:61] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:15.994259  687309 system_pods.go:61] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:37:15.994270  687309 system_pods.go:61] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:37:15.994291  687309 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:37:15.994305  687309 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:37:15.994312  687309 system_pods.go:61] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:37:15.994335  687309 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:37:15.994340  687309 system_pods.go:61] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Running
	I1207 23:37:15.994349  687309 system_pods.go:74] duration metric: took 3.6871ms to wait for pod list to return data ...
	I1207 23:37:15.994359  687309 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:37:15.996719  687309 default_sa.go:45] found service account: "default"
	I1207 23:37:15.996740  687309 default_sa.go:55] duration metric: took 2.371119ms for default service account to be created ...
	I1207 23:37:15.996750  687309 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:37:15.999790  687309 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:15.999816  687309 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:15.999824  687309 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:37:15.999831  687309 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:37:15.999839  687309 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:37:15.999852  687309 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:37:15.999881  687309 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:37:15.999889  687309 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:37:15.999895  687309 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Running
	I1207 23:37:15.999903  687309 system_pods.go:126] duration metric: took 3.146331ms to wait for k8s-apps to be running ...
	I1207 23:37:15.999911  687309 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:37:15.999966  687309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:37:16.014472  687309 system_svc.go:56] duration metric: took 14.550113ms WaitForService to wait for kubelet
	I1207 23:37:16.014510  687309 kubeadm.go:587] duration metric: took 3.538682419s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:37:16.014536  687309 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:37:16.017949  687309 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:37:16.017980  687309 node_conditions.go:123] node cpu capacity is 8
	I1207 23:37:16.017996  687309 node_conditions.go:105] duration metric: took 3.454545ms to run NodePressure ...
	I1207 23:37:16.018012  687309 start.go:242] waiting for startup goroutines ...
	I1207 23:37:16.018019  687309 start.go:247] waiting for cluster config update ...
	I1207 23:37:16.018030  687309 start.go:256] writing updated cluster config ...
	I1207 23:37:16.018338  687309 ssh_runner.go:195] Run: rm -f paused
	I1207 23:37:16.022608  687309 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:37:16.026747  687309 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p4v2f" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:37:18.033653  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	I1207 23:37:17.669990  684670 addons.go:530] duration metric: took 500.771902ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1207 23:37:17.955540  684670 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-600852" context rescaled to 1 replicas
	W1207 23:37:19.457758  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:21.958282  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:20.532841  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:22.534300  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:24.536204  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:24.458362  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:26.958204  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
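A note on the repeated 500s above: the apiserver marks poststarthook/rbac/bootstrap-roles (and briefly scheduling/bootstrap-system-priority-classes) as failed only while those post-start hooks are still finishing after the restart, and the same endpoint returns 200 a few seconds later. A minimal sketch of inspecting the same verbose health output by hand, assuming the on-node kubectl binary and kubeconfig paths shown in the surrounding log are still valid:

	# full per-check health detail (the same body the 500 responses above carry)
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl get --raw '/healthz?verbose'
	# an individual post-start hook can also be probed directly, e.g. the RBAC bootstrap roles check
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl get --raw '/healthz/poststarthook/rbac/bootstrap-roles'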
	
	
	==> CRI-O <==
	Dec 07 23:37:00 embed-certs-654118 crio[570]: time="2025-12-07T23:37:00.027155667Z" level=info msg="Started container" PID=1754 containerID=875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p/dashboard-metrics-scraper id=03b20226-72b5-4847-a476-debd2dbfc4cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=576929cd3297cf2a4ffc1b4dc1da0f6e5fa38c66dc9f1bcdc87a647aafdad827
	Dec 07 23:37:00 embed-certs-654118 crio[570]: time="2025-12-07T23:37:00.104320785Z" level=info msg="Removing container: e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c" id=c6642047-883b-4acc-8aee-1bd6de796b2b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:37:00 embed-certs-654118 crio[570]: time="2025-12-07T23:37:00.115349724Z" level=info msg="Removed container e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p/dashboard-metrics-scraper" id=c6642047-883b-4acc-8aee-1bd6de796b2b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.109266184Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=421244e0-aa0b-420d-92d9-5d8e2be81334 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.110262336Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=562f11b8-6659-4ebb-ae75-2e7be4899127 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.111649962Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a0a04da3-4b11-41a8-93e4-ec41a03ea548 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.111759387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.116797447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.1169782Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b42eac09ef6bc6ccd7ea8acb48090d220e35ffa106b8cf78e81b08cc564cf2f0/merged/etc/passwd: no such file or directory"
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.117006969Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b42eac09ef6bc6ccd7ea8acb48090d220e35ffa106b8cf78e81b08cc564cf2f0/merged/etc/group: no such file or directory"
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.11727408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.148407032Z" level=info msg="Created container a230f8e09c8a793d24bc930a0fb7c9e8f555725f765382beb79ac8621a4e3455: kube-system/storage-provisioner/storage-provisioner" id=a0a04da3-4b11-41a8-93e4-ec41a03ea548 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.149175543Z" level=info msg="Starting container: a230f8e09c8a793d24bc930a0fb7c9e8f555725f765382beb79ac8621a4e3455" id=fee8b1fd-66b5-402e-9568-e73d843bb268 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.151125322Z" level=info msg="Started container" PID=1768 containerID=a230f8e09c8a793d24bc930a0fb7c9e8f555725f765382beb79ac8621a4e3455 description=kube-system/storage-provisioner/storage-provisioner id=fee8b1fd-66b5-402e-9568-e73d843bb268 name=/runtime.v1.RuntimeService/StartContainer sandboxID=184b49863aff7bb406732f03e8802327a73dc6bf00293d761e2bf93f05834919
	Dec 07 23:37:22 embed-certs-654118 crio[570]: time="2025-12-07T23:37:22.985317073Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=479742d7-a01f-4479-bfef-ebdccd5082f3 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:22 embed-certs-654118 crio[570]: time="2025-12-07T23:37:22.986931011Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8d9ec9a6-b86c-4f9e-8f76-0a5060d160c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:22 embed-certs-654118 crio[570]: time="2025-12-07T23:37:22.988778654Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p/dashboard-metrics-scraper" id=dbead2cc-92c2-499e-ad26-954fe1c7735b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:22 embed-certs-654118 crio[570]: time="2025-12-07T23:37:22.98917598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:22 embed-certs-654118 crio[570]: time="2025-12-07T23:37:22.997120395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:22 embed-certs-654118 crio[570]: time="2025-12-07T23:37:22.997899471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:23 embed-certs-654118 crio[570]: time="2025-12-07T23:37:23.038403248Z" level=info msg="Created container 977e8fafdf74218cf51fae0fe63b18398a1e392fd9aca04d48a77e94825c5eb1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p/dashboard-metrics-scraper" id=dbead2cc-92c2-499e-ad26-954fe1c7735b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:23 embed-certs-654118 crio[570]: time="2025-12-07T23:37:23.039162864Z" level=info msg="Starting container: 977e8fafdf74218cf51fae0fe63b18398a1e392fd9aca04d48a77e94825c5eb1" id=792f272d-fa3b-4292-b2d4-0f2f3d03bbdb name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:37:23 embed-certs-654118 crio[570]: time="2025-12-07T23:37:23.041654616Z" level=info msg="Started container" PID=1806 containerID=977e8fafdf74218cf51fae0fe63b18398a1e392fd9aca04d48a77e94825c5eb1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p/dashboard-metrics-scraper id=792f272d-fa3b-4292-b2d4-0f2f3d03bbdb name=/runtime.v1.RuntimeService/StartContainer sandboxID=576929cd3297cf2a4ffc1b4dc1da0f6e5fa38c66dc9f1bcdc87a647aafdad827
	Dec 07 23:37:23 embed-certs-654118 crio[570]: time="2025-12-07T23:37:23.176471408Z" level=info msg="Removing container: 875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e" id=f4d4cb02-bf86-4649-a32d-3e9b2c87dd39 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:37:23 embed-certs-654118 crio[570]: time="2025-12-07T23:37:23.189438701Z" level=info msg="Removed container 875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p/dashboard-metrics-scraper" id=f4d4cb02-bf86-4649-a32d-3e9b2c87dd39 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	977e8fafdf742       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago        Exited              dashboard-metrics-scraper   3                   576929cd3297c       dashboard-metrics-scraper-6ffb444bf9-s2g7p   kubernetes-dashboard
	a230f8e09c8a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   184b49863aff7       storage-provisioner                          kube-system
	fbf4535fa2929       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago       Running             kubernetes-dashboard        0                   d6a1266848dba       kubernetes-dashboard-855c9754f9-8dl4x        kubernetes-dashboard
	a6c98c6dc2249       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           58 seconds ago       Running             coredns                     0                   399b2d963739d       coredns-66bc5c9577-wvgqf                     kube-system
	0f1dc0c7f1b35       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   78bdd627934b3       busybox                                      default
	fa59387c3b4d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   184b49863aff7       storage-provisioner                          kube-system
	64270ee075317       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   b11d7bd3b9609       kindnet-68q87                                kube-system
	9e595ec0ec0a2       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           58 seconds ago       Running             kube-proxy                  0                   8de12bd876fcf       kube-proxy-l75b2                             kube-system
	55f614a7d8907       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   be9bd961329a8       etcd-embed-certs-654118                      kube-system
	de2a8fefd0407       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           About a minute ago   Running             kube-apiserver              0                   acf48a297b1a1       kube-apiserver-embed-certs-654118            kube-system
	63dcc5abcffa7       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           About a minute ago   Running             kube-scheduler              0                   b89c4f989e484       kube-scheduler-embed-certs-654118            kube-system
	1c04ccfa6ad08       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           About a minute ago   Running             kube-controller-manager     0                   58c086861a477       kube-controller-manager-embed-certs-654118   kube-system
	
	
	==> coredns [a6c98c6dc2249ec043cc985ad99b2be276e7fb077b56a646b774572f9b0e43e9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55677 - 27609 "HINFO IN 7821679087082351473.2883090864246873011. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021622619s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
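The i/o timeouts above show coredns failing to reach 10.96.0.1:443, the ClusterIP of the default/kubernetes Service, while the control plane was coming back; the readiness plugin keeps reporting "Still waiting on: kubernetes" until that sync succeeds. A quick way to confirm what that address points at, assuming kubectl is aimed at the same embed-certs-654118 cluster:

	# 10.96.0.1 should be the ClusterIP of the kubernetes Service in the default namespace
	kubectl -n default get svc kubernetes -o wide
	# and its endpoints should list the apiserver address on this node (192.168.103.2)
	kubectl -n default get endpoints kubernetes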
	
	
	==> describe nodes <==
	Name:               embed-certs-654118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-654118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=embed-certs-654118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_34_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:34:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-654118
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:37:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:37:20 +0000   Sun, 07 Dec 2025 23:34:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:37:20 +0000   Sun, 07 Dec 2025 23:34:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:37:20 +0000   Sun, 07 Dec 2025 23:34:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:37:20 +0000   Sun, 07 Dec 2025 23:35:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-654118
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                03c8ca8e-58f6-4b1a-acac-362ecdda585b
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-wvgqf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m24s
	  kube-system                 etcd-embed-certs-654118                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m30s
	  kube-system                 kindnet-68q87                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-embed-certs-654118             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-embed-certs-654118    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-l75b2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-embed-certs-654118             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s2g7p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8dl4x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m23s                  kube-proxy       
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node embed-certs-654118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node embed-certs-654118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m34s (x8 over 2m34s)  kubelet          Node embed-certs-654118 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m30s                  kubelet          Node embed-certs-654118 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m30s                  kubelet          Node embed-certs-654118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m30s                  kubelet          Node embed-certs-654118 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m25s                  node-controller  Node embed-certs-654118 event: Registered Node embed-certs-654118 in Controller
	  Normal  NodeReady                103s                   kubelet          Node embed-certs-654118 status is now: NodeReady
	  Normal  Starting                 62s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 62s)      kubelet          Node embed-certs-654118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 62s)      kubelet          Node embed-certs-654118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 62s)      kubelet          Node embed-certs-654118 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                    node-controller  Node embed-certs-654118 event: Registered Node embed-certs-654118 in Controller
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [55f614a7d89079ce6b0150051faf8399dea9fe3ee0db5301b1f6eb9811f274fb] <==
	{"level":"warn","ts":"2025-12-07T23:36:28.680963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.688222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.696058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.703415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.710892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.718034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.729497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.736446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.743293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.750162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.757381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.764194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.771871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.787348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.795810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.805257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.813631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.834452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.841539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.849819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.905675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:56.262457Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.676561ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790530665223496 > lease_revoke:<id:40899afb2c7a94b0>","response":"size:28"}
	{"level":"info","ts":"2025-12-07T23:36:56.262578Z","caller":"traceutil/trace.go:172","msg":"trace[891339229] linearizableReadLoop","detail":"{readStateIndex:694; appliedIndex:693; }","duration":"126.822942ms","start":"2025-12-07T23:36:56.135740Z","end":"2025-12-07T23:36:56.262563Z","steps":["trace[891339229] 'read index received'  (duration: 39.193µs)","trace[891339229] 'applied index is now lower than readState.Index'  (duration: 126.782684ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-07T23:36:56.262708Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.964796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-654118\" limit:1 ","response":"range_response_count:1 size:5709"}
	{"level":"info","ts":"2025-12-07T23:36:56.262735Z","caller":"traceutil/trace.go:172","msg":"trace[76058325] range","detail":"{range_begin:/registry/minions/embed-certs-654118; range_end:; response_count:1; response_revision:653; }","duration":"127.001291ms","start":"2025-12-07T23:36:56.135725Z","end":"2025-12-07T23:36:56.262727Z","steps":["trace[76058325] 'agreement among raft nodes before linearized reading'  (duration: 126.876239ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:37:29 up  2:19,  0 user,  load average: 3.50, 2.80, 2.08
	Linux embed-certs-654118 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [64270ee075317594cd8574f52acb74ad205fd052a7c4a7a070e7c82ad1a83c22] <==
	I1207 23:36:30.623225       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:36:30.623718       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1207 23:36:30.625641       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:36:30.625699       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:36:30.625770       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:36:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:36:30.829172       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:36:30.891156       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:36:30.891187       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:36:30.891346       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:36:31.119792       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:36:31.119848       1 metrics.go:72] Registering metrics
	I1207 23:36:31.120027       1 controller.go:711] "Syncing nftables rules"
	I1207 23:36:40.828946       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:36:40.829008       1 main.go:301] handling current node
	I1207 23:36:50.834444       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:36:50.834474       1 main.go:301] handling current node
	I1207 23:37:00.829263       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:37:00.829301       1 main.go:301] handling current node
	I1207 23:37:10.830407       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:37:10.830462       1 main.go:301] handling current node
	I1207 23:37:20.829203       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:37:20.829249       1 main.go:301] handling current node
	
	
	==> kube-apiserver [de2a8fefd04073ed27eff698be1e31a40e77a0d4e91f60687ad522521cb5f30a] <==
	I1207 23:36:29.456955       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1207 23:36:29.460850       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1207 23:36:29.460975       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1207 23:36:29.461375       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:36:29.461531       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1207 23:36:29.461598       1 aggregator.go:171] initial CRD sync complete...
	I1207 23:36:29.461610       1 autoregister_controller.go:144] Starting autoregister controller
	I1207 23:36:29.461618       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:36:29.461625       1 cache.go:39] Caches are synced for autoregister controller
	E1207 23:36:29.475602       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:36:29.481069       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:36:29.505454       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:36:29.529462       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1207 23:36:29.780486       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:36:29.814771       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:36:29.841798       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:36:29.852119       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:36:29.860135       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:36:29.898472       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.27.191"}
	I1207 23:36:29.915124       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.248.19"}
	I1207 23:36:30.358039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 23:36:32.840057       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:36:33.287744       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:36:33.437136       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:36:33.437146       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1c04ccfa6ad08a37efa73abd2f81a78cc8ab1e12cae0f419d99b512bde0a19c0] <==
	I1207 23:36:32.803817       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1207 23:36:32.803821       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1207 23:36:32.804748       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1207 23:36:32.808143       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1207 23:36:32.809355       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1207 23:36:32.811598       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1207 23:36:32.815938       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1207 23:36:32.816112       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1207 23:36:32.816195       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-654118"
	I1207 23:36:32.816259       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1207 23:36:32.834475       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1207 23:36:32.834528       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1207 23:36:32.834539       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1207 23:36:32.834543       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1207 23:36:32.834549       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1207 23:36:32.834556       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1207 23:36:32.834574       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1207 23:36:32.834664       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1207 23:36:32.834735       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1207 23:36:32.834682       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1207 23:36:32.834955       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1207 23:36:32.837253       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1207 23:36:32.839605       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1207 23:36:32.839646       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 23:36:32.865106       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9e595ec0ec0a2a4f455100334da2b7bc91d7b90dbc422aa9f96b4bfcbd14e784] <==
	I1207 23:36:30.466631       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:36:30.533978       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 23:36:30.635080       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 23:36:30.635118       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1207 23:36:30.635848       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:36:30.668546       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:36:30.668611       1 server_linux.go:132] "Using iptables Proxier"
	I1207 23:36:30.676060       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:36:30.678468       1 server.go:527] "Version info" version="v1.34.2"
	I1207 23:36:30.678515       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:36:30.680250       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:36:30.680273       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:36:30.680308       1 config.go:200] "Starting service config controller"
	I1207 23:36:30.680315       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:36:30.680395       1 config.go:309] "Starting node config controller"
	I1207 23:36:30.680407       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:36:30.681486       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:36:30.681553       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:36:30.781397       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:36:30.781479       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:36:30.781615       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:36:30.781779       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [63dcc5abcffa72045b4ce0dfe82b7bff6403005be06354ce602e9140d0e7be08] <==
	I1207 23:36:28.069720       1 serving.go:386] Generated self-signed cert in-memory
	W1207 23:36:29.409700       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:36:29.409748       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:36:29.409770       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:36:29.409779       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:36:29.447793       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1207 23:36:29.447825       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:36:29.450824       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:36:29.450881       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:36:29.451496       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:36:29.451579       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:36:29.552086       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 23:36:33 embed-certs-654118 kubelet[737]: I1207 23:36:33.543427     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e62df48b-0039-460c-a6cc-935084c26cf3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-s2g7p\" (UID: \"e62df48b-0039-460c-a6cc-935084c26cf3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p"
	Dec 07 23:36:40 embed-certs-654118 kubelet[737]: I1207 23:36:40.460700     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8dl4x" podStartSLOduration=2.725759245 podStartE2EDuration="7.460657233s" podCreationTimestamp="2025-12-07 23:36:33 +0000 UTC" firstStartedPulling="2025-12-07 23:36:33.738651039 +0000 UTC m=+6.844756216" lastFinishedPulling="2025-12-07 23:36:38.473549016 +0000 UTC m=+11.579654204" observedRunningTime="2025-12-07 23:36:39.053166273 +0000 UTC m=+12.159271469" watchObservedRunningTime="2025-12-07 23:36:40.460657233 +0000 UTC m=+13.566762426"
	Dec 07 23:36:42 embed-certs-654118 kubelet[737]: I1207 23:36:42.048445     737 scope.go:117] "RemoveContainer" containerID="a8def650128b6b0deb078ecef07e4892c67193bc5598fc8adf125c8bbec80e14"
	Dec 07 23:36:43 embed-certs-654118 kubelet[737]: I1207 23:36:43.053603     737 scope.go:117] "RemoveContainer" containerID="a8def650128b6b0deb078ecef07e4892c67193bc5598fc8adf125c8bbec80e14"
	Dec 07 23:36:43 embed-certs-654118 kubelet[737]: I1207 23:36:43.053930     737 scope.go:117] "RemoveContainer" containerID="e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c"
	Dec 07 23:36:43 embed-certs-654118 kubelet[737]: E1207 23:36:43.054149     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2g7p_kubernetes-dashboard(e62df48b-0039-460c-a6cc-935084c26cf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p" podUID="e62df48b-0039-460c-a6cc-935084c26cf3"
	Dec 07 23:36:44 embed-certs-654118 kubelet[737]: I1207 23:36:44.058798     737 scope.go:117] "RemoveContainer" containerID="e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c"
	Dec 07 23:36:44 embed-certs-654118 kubelet[737]: E1207 23:36:44.058996     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2g7p_kubernetes-dashboard(e62df48b-0039-460c-a6cc-935084c26cf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p" podUID="e62df48b-0039-460c-a6cc-935084c26cf3"
	Dec 07 23:36:47 embed-certs-654118 kubelet[737]: I1207 23:36:47.649780     737 scope.go:117] "RemoveContainer" containerID="e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c"
	Dec 07 23:36:47 embed-certs-654118 kubelet[737]: E1207 23:36:47.650063     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2g7p_kubernetes-dashboard(e62df48b-0039-460c-a6cc-935084c26cf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p" podUID="e62df48b-0039-460c-a6cc-935084c26cf3"
	Dec 07 23:36:59 embed-certs-654118 kubelet[737]: I1207 23:36:59.985504     737 scope.go:117] "RemoveContainer" containerID="e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c"
	Dec 07 23:37:00 embed-certs-654118 kubelet[737]: I1207 23:37:00.102706     737 scope.go:117] "RemoveContainer" containerID="e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c"
	Dec 07 23:37:00 embed-certs-654118 kubelet[737]: I1207 23:37:00.102986     737 scope.go:117] "RemoveContainer" containerID="875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e"
	Dec 07 23:37:00 embed-certs-654118 kubelet[737]: E1207 23:37:00.103198     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2g7p_kubernetes-dashboard(e62df48b-0039-460c-a6cc-935084c26cf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p" podUID="e62df48b-0039-460c-a6cc-935084c26cf3"
	Dec 07 23:37:01 embed-certs-654118 kubelet[737]: I1207 23:37:01.108881     737 scope.go:117] "RemoveContainer" containerID="fa59387c3b4d4bfd483cee16a4f633f23a1c3789f8c37f1fa4f4d2b9c9a3ed6a"
	Dec 07 23:37:07 embed-certs-654118 kubelet[737]: I1207 23:37:07.649918     737 scope.go:117] "RemoveContainer" containerID="875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e"
	Dec 07 23:37:07 embed-certs-654118 kubelet[737]: E1207 23:37:07.650159     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2g7p_kubernetes-dashboard(e62df48b-0039-460c-a6cc-935084c26cf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p" podUID="e62df48b-0039-460c-a6cc-935084c26cf3"
	Dec 07 23:37:22 embed-certs-654118 kubelet[737]: I1207 23:37:22.984801     737 scope.go:117] "RemoveContainer" containerID="875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e"
	Dec 07 23:37:23 embed-certs-654118 kubelet[737]: I1207 23:37:23.173651     737 scope.go:117] "RemoveContainer" containerID="875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e"
	Dec 07 23:37:23 embed-certs-654118 kubelet[737]: I1207 23:37:23.173903     737 scope.go:117] "RemoveContainer" containerID="977e8fafdf74218cf51fae0fe63b18398a1e392fd9aca04d48a77e94825c5eb1"
	Dec 07 23:37:23 embed-certs-654118 kubelet[737]: E1207 23:37:23.174101     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2g7p_kubernetes-dashboard(e62df48b-0039-460c-a6cc-935084c26cf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p" podUID="e62df48b-0039-460c-a6cc-935084c26cf3"
	Dec 07 23:37:25 embed-certs-654118 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 07 23:37:25 embed-certs-654118 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 07 23:37:25 embed-certs-654118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 07 23:37:25 embed-certs-654118 systemd[1]: kubelet.service: Consumed 1.967s CPU time.
	
	
	==> kubernetes-dashboard [fbf4535fa292992611e22cc68e13a796e2e4470d6418b306a556048000c2c4a4] <==
	2025/12/07 23:36:38 Starting overwatch
	2025/12/07 23:36:38 Using namespace: kubernetes-dashboard
	2025/12/07 23:36:38 Using in-cluster config to connect to apiserver
	2025/12/07 23:36:38 Using secret token for csrf signing
	2025/12/07 23:36:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/07 23:36:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/07 23:36:38 Successful initial request to the apiserver, version: v1.34.2
	2025/12/07 23:36:38 Generating JWE encryption key
	2025/12/07 23:36:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/07 23:36:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/07 23:36:39 Initializing JWE encryption key from synchronized object
	2025/12/07 23:36:39 Creating in-cluster Sidecar client
	2025/12/07 23:36:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/07 23:36:39 Serving insecurely on HTTP port: 9090
	2025/12/07 23:37:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a230f8e09c8a793d24bc930a0fb7c9e8f555725f765382beb79ac8621a4e3455] <==
	I1207 23:37:01.172561       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 23:37:01.172598       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 23:37:01.175121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:04.631190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:08.895190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:12.497520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:15.551488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:18.573920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:18.578514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:37:18.578674       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 23:37:18.578788       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69a0c6a4-6b58-458f-b7fc-bc544f9a2bed", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-654118_eda95e15-9331-4d69-961f-ac0635ce5997 became leader
	I1207 23:37:18.578823       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-654118_eda95e15-9331-4d69-961f-ac0635ce5997!
	W1207 23:37:18.581405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:18.584718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:37:18.679612       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-654118_eda95e15-9331-4d69-961f-ac0635ce5997!
	W1207 23:37:20.588454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:20.592594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:22.597618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:22.602457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:24.606433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:24.669381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:26.673083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:26.677014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:28.680559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:28.686803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fa59387c3b4d4bfd483cee16a4f633f23a1c3789f8c37f1fa4f4d2b9c9a3ed6a] <==
	I1207 23:36:30.417191       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1207 23:37:00.421942       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-654118 -n embed-certs-654118
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-654118 -n embed-certs-654118: exit status 2 (360.398696ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-654118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-654118
helpers_test.go:243: (dbg) docker inspect embed-certs-654118:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06",
	        "Created": "2025-12-07T23:34:44.331761062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 673801,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:36:20.069725191Z",
	            "FinishedAt": "2025-12-07T23:36:18.346277135Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06/hostname",
	        "HostsPath": "/var/lib/docker/containers/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06/hosts",
	        "LogPath": "/var/lib/docker/containers/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06/c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06-json.log",
	        "Name": "/embed-certs-654118",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-654118:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-654118",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c652041fdce083d2960416540159c52a229547c9c1d310673112a81f91cd7e06",
	                "LowerDir": "/var/lib/docker/overlay2/b033e7e02e0290ed765f992d60e4a6dc2240c75ef7b2064b0c47febefaf70b5f-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b033e7e02e0290ed765f992d60e4a6dc2240c75ef7b2064b0c47febefaf70b5f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b033e7e02e0290ed765f992d60e4a6dc2240c75ef7b2064b0c47febefaf70b5f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b033e7e02e0290ed765f992d60e4a6dc2240c75ef7b2064b0c47febefaf70b5f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-654118",
	                "Source": "/var/lib/docker/volumes/embed-certs-654118/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-654118",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-654118",
	                "name.minikube.sigs.k8s.io": "embed-certs-654118",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "49e2d7ac6f0b7433403a9e02f76c19ccaeaa3e1676d41fb879ec5639a6b4e3f1",
	            "SandboxKey": "/var/run/docker/netns/49e2d7ac6f0b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-654118": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eae277504c57bb79a350439d5c756b806a60082b42083657979990253737dde6",
	                    "EndpointID": "8d2449fb69a58971a630c085fbf632f3315958f53c4f2268ff88adc8cda14cba",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "52:c4:16:61:28:af",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-654118",
	                        "c652041fdce0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-654118 -n embed-certs-654118
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-654118 -n embed-certs-654118: exit status 2 (362.992666ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-654118 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-654118 logs -n 25: (1.393393377s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-458242 ssh sudo cat /etc/ssl/certs/393125.pem                                                 │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	│ ssh     │ functional-458242 ssh sudo crictl images                                                                 │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:04 UTC │ 07 Dec 25 23:04 UTC │
	│ ssh     │ functional-458242 ssh sudo cat /usr/share/ca-certificates/393125.pem                                     │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	│ ssh     │ functional-458242 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:04 UTC │ 07 Dec 25 23:04 UTC │
	│ ssh     │ functional-458242 ssh findmnt -T /mount1                                                                 │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	│ ssh     │ functional-458242 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:04 UTC │                     │
	│ image   │ functional-458242 image ls                                                                               │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	│ cache   │ functional-458242 cache reload                                                                           │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:04 UTC │ 07 Dec 25 23:04 UTC │
	│ ssh     │ functional-458242 ssh sudo cat /etc/ssl/certs/51391683.0                                                 │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	│ ssh     │ functional-458242 ssh findmnt -T /mount2                                                                 │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	│ ssh     │ functional-458242 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:04 UTC │ 07 Dec 25 23:04 UTC │
	│ image   │ functional-458242 image save --daemon kicbase/echo-server:functional-458242 --alsologtostderr            │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │                     │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 07 Dec 25 23:04 UTC │ 07 Dec 25 23:04 UTC │
	│ ssh     │ functional-458242 ssh sudo cat /etc/ssl/certs/3931252.pem                                                │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 07 Dec 25 23:04 UTC │ 07 Dec 25 23:04 UTC │
	│ ssh     │ functional-458242 ssh findmnt -T /mount3                                                                 │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	│ kubectl │ functional-458242 kubectl -- --context functional-458242 get pods                                        │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:04 UTC │ 07 Dec 25 23:04 UTC │
	│ ssh     │ functional-458242 ssh sudo cat /usr/share/ca-certificates/3931252.pem                                    │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	│ start   │ -p functional-458242 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:04 UTC │ 07 Dec 25 23:05 UTC │
	│ service │ invalid-svc -p functional-458242                                                                         │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │                     │
	│ mount   │ -p functional-458242 --kill=true                                                                         │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │                     │
	│ cp      │ functional-458242 cp testdata/cp-test.txt /home/docker/cp-test.txt                                       │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	│ ssh     │ functional-458242 ssh echo hello                                                                         │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	│ config  │ functional-458242 config unset cpus                                                                      │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	│ ssh     │ functional-458242 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                 │ functional-458242 │ jenkins │ v1.37.0 │ 07 Dec 25 23:05 UTC │ 07 Dec 25 23:05 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:37:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:37:04.722045  687309 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:37:04.722146  687309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:37:04.722151  687309 out.go:374] Setting ErrFile to fd 2...
	I1207 23:37:04.722155  687309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:37:04.722416  687309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:37:04.722887  687309 out.go:368] Setting JSON to false
	I1207 23:37:04.724036  687309 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8369,"bootTime":1765142256,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:37:04.724104  687309 start.go:143] virtualization: kvm guest
	I1207 23:37:04.726136  687309 out.go:179] * [default-k8s-diff-port-312944] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:37:04.727393  687309 notify.go:221] Checking for updates...
	I1207 23:37:04.727408  687309 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:37:04.728657  687309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:37:04.730027  687309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:37:04.731379  687309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:37:04.732624  687309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:37:04.733762  687309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:37:04.735574  687309 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:04.736385  687309 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:37:04.761948  687309 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:37:04.762056  687309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:04.817188  687309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-07 23:37:04.807477634 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:04.817291  687309 docker.go:319] overlay module found
	I1207 23:37:04.820120  687309 out.go:179] * Using the docker driver based on existing profile
	I1207 23:37:04.821288  687309 start.go:309] selected driver: docker
	I1207 23:37:04.821309  687309 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-312944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:04.821413  687309 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:37:04.821985  687309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:04.885662  687309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-07 23:37:04.874804599 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:04.885946  687309 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:37:04.885980  687309 cni.go:84] Creating CNI manager for ""
	I1207 23:37:04.886031  687309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:37:04.886072  687309 start.go:353] cluster config:
	{Name:default-k8s-diff-port-312944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:04.887849  687309 out.go:179] * Starting "default-k8s-diff-port-312944" primary control-plane node in "default-k8s-diff-port-312944" cluster
	I1207 23:37:04.889015  687309 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:37:04.890364  687309 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:37:04.891508  687309 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:04.891547  687309 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 23:37:04.891558  687309 cache.go:65] Caching tarball of preloaded images
	I1207 23:37:04.891619  687309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:37:04.891648  687309 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:37:04.891657  687309 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:37:04.891747  687309 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/config.json ...
	I1207 23:37:04.914740  687309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:37:04.914773  687309 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:37:04.914795  687309 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:37:04.914831  687309 start.go:360] acquireMachinesLock for default-k8s-diff-port-312944: {Name:mk446704c0609871a6f2b287c350f0600ce374c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:37:04.914903  687309 start.go:364] duration metric: took 44.996µs to acquireMachinesLock for "default-k8s-diff-port-312944"
	I1207 23:37:04.914924  687309 start.go:96] Skipping create...Using existing machine configuration
	I1207 23:37:04.914931  687309 fix.go:54] fixHost starting: 
	I1207 23:37:04.915230  687309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:37:04.933868  687309 fix.go:112] recreateIfNeeded on default-k8s-diff-port-312944: state=Stopped err=<nil>
	W1207 23:37:04.933902  687309 fix.go:138] unexpected machine state, will restart: <nil>
	W1207 23:37:00.738011  673565 node_ready.go:57] node "auto-600852" has "Ready":"False" status (will retry)
	I1207 23:37:02.736950  673565 node_ready.go:49] node "auto-600852" is "Ready"
	I1207 23:37:02.736980  673565 node_ready.go:38] duration metric: took 11.002778413s for node "auto-600852" to be "Ready" ...
	I1207 23:37:02.736997  673565 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:37:02.737066  673565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:37:02.751037  673565 api_server.go:72] duration metric: took 11.345617446s to wait for apiserver process to appear ...
	I1207 23:37:02.751079  673565 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:37:02.751106  673565 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1207 23:37:02.755278  673565 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1207 23:37:02.756387  673565 api_server.go:141] control plane version: v1.34.2
	I1207 23:37:02.756412  673565 api_server.go:131] duration metric: took 5.325955ms to wait for apiserver health ...
	I1207 23:37:02.756420  673565 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:37:02.759924  673565 system_pods.go:59] 8 kube-system pods found
	I1207 23:37:02.759969  673565 system_pods.go:61] "coredns-66bc5c9577-cvkqs" [21e932cc-f500-4e42-a043-59494f1ef96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:02.759978  673565 system_pods.go:61] "etcd-auto-600852" [dfb2cc27-d003-4c95-93c5-ee04651fbc56] Running
	I1207 23:37:02.759997  673565 system_pods.go:61] "kindnet-htd2n" [f0285656-53e9-4405-a905-6c8de6034470] Running
	I1207 23:37:02.760002  673565 system_pods.go:61] "kube-apiserver-auto-600852" [54fd7cf0-fe8c-44ce-bdc9-ea4d438cd061] Running
	I1207 23:37:02.760008  673565 system_pods.go:61] "kube-controller-manager-auto-600852" [45539d0e-185f-4c78-b238-0f776feb4bbb] Running
	I1207 23:37:02.760015  673565 system_pods.go:61] "kube-proxy-smqcr" [81c29963-801c-47a8-ba98-733d78c3b341] Running
	I1207 23:37:02.760020  673565 system_pods.go:61] "kube-scheduler-auto-600852" [f1899c61-58d6-4f1e-8568-a0c69337ce73] Running
	I1207 23:37:02.760030  673565 system_pods.go:61] "storage-provisioner" [eeed8067-2ea0-4f0b-b48f-bbfd0fed14a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:02.760038  673565 system_pods.go:74] duration metric: took 3.611353ms to wait for pod list to return data ...
	I1207 23:37:02.760049  673565 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:37:02.762530  673565 default_sa.go:45] found service account: "default"
	I1207 23:37:02.762563  673565 default_sa.go:55] duration metric: took 2.49853ms for default service account to be created ...
	I1207 23:37:02.762574  673565 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:37:02.765308  673565 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:02.765366  673565 system_pods.go:89] "coredns-66bc5c9577-cvkqs" [21e932cc-f500-4e42-a043-59494f1ef96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:02.765375  673565 system_pods.go:89] "etcd-auto-600852" [dfb2cc27-d003-4c95-93c5-ee04651fbc56] Running
	I1207 23:37:02.765383  673565 system_pods.go:89] "kindnet-htd2n" [f0285656-53e9-4405-a905-6c8de6034470] Running
	I1207 23:37:02.765388  673565 system_pods.go:89] "kube-apiserver-auto-600852" [54fd7cf0-fe8c-44ce-bdc9-ea4d438cd061] Running
	I1207 23:37:02.765394  673565 system_pods.go:89] "kube-controller-manager-auto-600852" [45539d0e-185f-4c78-b238-0f776feb4bbb] Running
	I1207 23:37:02.765404  673565 system_pods.go:89] "kube-proxy-smqcr" [81c29963-801c-47a8-ba98-733d78c3b341] Running
	I1207 23:37:02.765409  673565 system_pods.go:89] "kube-scheduler-auto-600852" [f1899c61-58d6-4f1e-8568-a0c69337ce73] Running
	I1207 23:37:02.765419  673565 system_pods.go:89] "storage-provisioner" [eeed8067-2ea0-4f0b-b48f-bbfd0fed14a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:02.765444  673565 retry.go:31] will retry after 204.835553ms: missing components: kube-dns
	I1207 23:37:02.976001  673565 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:02.976063  673565 system_pods.go:89] "coredns-66bc5c9577-cvkqs" [21e932cc-f500-4e42-a043-59494f1ef96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:02.976071  673565 system_pods.go:89] "etcd-auto-600852" [dfb2cc27-d003-4c95-93c5-ee04651fbc56] Running
	I1207 23:37:02.976086  673565 system_pods.go:89] "kindnet-htd2n" [f0285656-53e9-4405-a905-6c8de6034470] Running
	I1207 23:37:02.976092  673565 system_pods.go:89] "kube-apiserver-auto-600852" [54fd7cf0-fe8c-44ce-bdc9-ea4d438cd061] Running
	I1207 23:37:02.976105  673565 system_pods.go:89] "kube-controller-manager-auto-600852" [45539d0e-185f-4c78-b238-0f776feb4bbb] Running
	I1207 23:37:02.976111  673565 system_pods.go:89] "kube-proxy-smqcr" [81c29963-801c-47a8-ba98-733d78c3b341] Running
	I1207 23:37:02.976120  673565 system_pods.go:89] "kube-scheduler-auto-600852" [f1899c61-58d6-4f1e-8568-a0c69337ce73] Running
	I1207 23:37:02.976128  673565 system_pods.go:89] "storage-provisioner" [eeed8067-2ea0-4f0b-b48f-bbfd0fed14a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:02.976152  673565 retry.go:31] will retry after 367.780953ms: missing components: kube-dns
	I1207 23:37:03.347925  673565 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:03.347975  673565 system_pods.go:89] "coredns-66bc5c9577-cvkqs" [21e932cc-f500-4e42-a043-59494f1ef96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:03.347984  673565 system_pods.go:89] "etcd-auto-600852" [dfb2cc27-d003-4c95-93c5-ee04651fbc56] Running
	I1207 23:37:03.347991  673565 system_pods.go:89] "kindnet-htd2n" [f0285656-53e9-4405-a905-6c8de6034470] Running
	I1207 23:37:03.347996  673565 system_pods.go:89] "kube-apiserver-auto-600852" [54fd7cf0-fe8c-44ce-bdc9-ea4d438cd061] Running
	I1207 23:37:03.348002  673565 system_pods.go:89] "kube-controller-manager-auto-600852" [45539d0e-185f-4c78-b238-0f776feb4bbb] Running
	I1207 23:37:03.348014  673565 system_pods.go:89] "kube-proxy-smqcr" [81c29963-801c-47a8-ba98-733d78c3b341] Running
	I1207 23:37:03.348019  673565 system_pods.go:89] "kube-scheduler-auto-600852" [f1899c61-58d6-4f1e-8568-a0c69337ce73] Running
	I1207 23:37:03.348030  673565 system_pods.go:89] "storage-provisioner" [eeed8067-2ea0-4f0b-b48f-bbfd0fed14a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:03.348055  673565 retry.go:31] will retry after 327.949085ms: missing components: kube-dns
	I1207 23:37:03.680051  673565 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:03.680084  673565 system_pods.go:89] "coredns-66bc5c9577-cvkqs" [21e932cc-f500-4e42-a043-59494f1ef96c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:03.680091  673565 system_pods.go:89] "etcd-auto-600852" [dfb2cc27-d003-4c95-93c5-ee04651fbc56] Running
	I1207 23:37:03.680097  673565 system_pods.go:89] "kindnet-htd2n" [f0285656-53e9-4405-a905-6c8de6034470] Running
	I1207 23:37:03.680100  673565 system_pods.go:89] "kube-apiserver-auto-600852" [54fd7cf0-fe8c-44ce-bdc9-ea4d438cd061] Running
	I1207 23:37:03.680104  673565 system_pods.go:89] "kube-controller-manager-auto-600852" [45539d0e-185f-4c78-b238-0f776feb4bbb] Running
	I1207 23:37:03.680107  673565 system_pods.go:89] "kube-proxy-smqcr" [81c29963-801c-47a8-ba98-733d78c3b341] Running
	I1207 23:37:03.680110  673565 system_pods.go:89] "kube-scheduler-auto-600852" [f1899c61-58d6-4f1e-8568-a0c69337ce73] Running
	I1207 23:37:03.680117  673565 system_pods.go:89] "storage-provisioner" [eeed8067-2ea0-4f0b-b48f-bbfd0fed14a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:03.680134  673565 retry.go:31] will retry after 591.359397ms: missing components: kube-dns
	I1207 23:37:04.276928  673565 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:04.276964  673565 system_pods.go:89] "coredns-66bc5c9577-cvkqs" [21e932cc-f500-4e42-a043-59494f1ef96c] Running
	I1207 23:37:04.276974  673565 system_pods.go:89] "etcd-auto-600852" [dfb2cc27-d003-4c95-93c5-ee04651fbc56] Running
	I1207 23:37:04.276980  673565 system_pods.go:89] "kindnet-htd2n" [f0285656-53e9-4405-a905-6c8de6034470] Running
	I1207 23:37:04.276985  673565 system_pods.go:89] "kube-apiserver-auto-600852" [54fd7cf0-fe8c-44ce-bdc9-ea4d438cd061] Running
	I1207 23:37:04.276992  673565 system_pods.go:89] "kube-controller-manager-auto-600852" [45539d0e-185f-4c78-b238-0f776feb4bbb] Running
	I1207 23:37:04.277000  673565 system_pods.go:89] "kube-proxy-smqcr" [81c29963-801c-47a8-ba98-733d78c3b341] Running
	I1207 23:37:04.277005  673565 system_pods.go:89] "kube-scheduler-auto-600852" [f1899c61-58d6-4f1e-8568-a0c69337ce73] Running
	I1207 23:37:04.277010  673565 system_pods.go:89] "storage-provisioner" [eeed8067-2ea0-4f0b-b48f-bbfd0fed14a7] Running
	I1207 23:37:04.277020  673565 system_pods.go:126] duration metric: took 1.514439586s to wait for k8s-apps to be running ...
	I1207 23:37:04.277034  673565 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:37:04.277088  673565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:37:04.291684  673565 system_svc.go:56] duration metric: took 14.64018ms WaitForService to wait for kubelet
	I1207 23:37:04.291718  673565 kubeadm.go:587] duration metric: took 12.886306227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:37:04.291743  673565 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:37:04.294886  673565 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:37:04.294923  673565 node_conditions.go:123] node cpu capacity is 8
	I1207 23:37:04.294946  673565 node_conditions.go:105] duration metric: took 3.196073ms to run NodePressure ...
	I1207 23:37:04.294964  673565 start.go:242] waiting for startup goroutines ...
	I1207 23:37:04.294980  673565 start.go:247] waiting for cluster config update ...
	I1207 23:37:04.295000  673565 start.go:256] writing updated cluster config ...
	I1207 23:37:04.295467  673565 ssh_runner.go:195] Run: rm -f paused
	I1207 23:37:04.299846  673565 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:37:04.304102  673565 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cvkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.309006  673565 pod_ready.go:94] pod "coredns-66bc5c9577-cvkqs" is "Ready"
	I1207 23:37:04.309034  673565 pod_ready.go:86] duration metric: took 4.900761ms for pod "coredns-66bc5c9577-cvkqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.311296  673565 pod_ready.go:83] waiting for pod "etcd-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.315648  673565 pod_ready.go:94] pod "etcd-auto-600852" is "Ready"
	I1207 23:37:04.315672  673565 pod_ready.go:86] duration metric: took 4.352934ms for pod "etcd-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.317910  673565 pod_ready.go:83] waiting for pod "kube-apiserver-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.322193  673565 pod_ready.go:94] pod "kube-apiserver-auto-600852" is "Ready"
	I1207 23:37:04.322221  673565 pod_ready.go:86] duration metric: took 4.280832ms for pod "kube-apiserver-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.324442  673565 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.704268  673565 pod_ready.go:94] pod "kube-controller-manager-auto-600852" is "Ready"
	I1207 23:37:04.704299  673565 pod_ready.go:86] duration metric: took 379.836962ms for pod "kube-controller-manager-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:04.904744  673565 pod_ready.go:83] waiting for pod "kube-proxy-smqcr" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:05.304822  673565 pod_ready.go:94] pod "kube-proxy-smqcr" is "Ready"
	I1207 23:37:05.304848  673565 pod_ready.go:86] duration metric: took 400.076972ms for pod "kube-proxy-smqcr" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:05.505043  673565 pod_ready.go:83] waiting for pod "kube-scheduler-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:05.904580  673565 pod_ready.go:94] pod "kube-scheduler-auto-600852" is "Ready"
	I1207 23:37:05.904608  673565 pod_ready.go:86] duration metric: took 399.534639ms for pod "kube-scheduler-auto-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:05.904620  673565 pod_ready.go:40] duration metric: took 1.604737225s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:37:05.952021  673565 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 23:37:05.954352  673565 out.go:179] * Done! kubectl is now configured to use "auto-600852" cluster and "default" namespace by default
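The retry loop above polls kube-system pods by label until each reports Ready; the 4m0s budget and the k8s-app=kube-dns selector come straight from the log lines. A rough manual equivalent against the same cluster (a sketch, assuming the "auto-600852" kubeconfig context the run writes at the end):

    kubectl --context auto-600852 -n kube-system get pods
    kubectl --context auto-600852 -n kube-system wait pod \
        -l k8s-app=kube-dns --for=condition=Ready --timeout=240s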
	W1207 23:37:05.068234  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	W1207 23:37:07.559644  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	I1207 23:37:04.935946  687309 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-312944" ...
	I1207 23:37:04.936017  687309 cli_runner.go:164] Run: docker start default-k8s-diff-port-312944
	I1207 23:37:05.208710  687309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:37:05.229083  687309 kic.go:430] container "default-k8s-diff-port-312944" state is running.
	I1207 23:37:05.229522  687309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:37:05.249428  687309 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/config.json ...
	I1207 23:37:05.249661  687309 machine.go:94] provisionDockerMachine start ...
	I1207 23:37:05.249725  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:05.269354  687309 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:05.269708  687309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1207 23:37:05.269727  687309 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:37:05.270396  687309 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37942->127.0.0.1:33483: read: connection reset by peer
	I1207 23:37:08.422504  687309 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-312944
	
	I1207 23:37:08.422536  687309 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-312944"
	I1207 23:37:08.422599  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:08.448957  687309 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:08.449395  687309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1207 23:37:08.449418  687309 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-312944 && echo "default-k8s-diff-port-312944" | sudo tee /etc/hostname
	I1207 23:37:08.605519  687309 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-312944
	
	I1207 23:37:08.605821  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:08.629868  687309 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:08.630212  687309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1207 23:37:08.630245  687309 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-312944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-312944/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-312944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:37:08.776693  687309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:37:08.776724  687309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:37:08.776748  687309 ubuntu.go:190] setting up certificates
	I1207 23:37:08.776760  687309 provision.go:84] configureAuth start
	I1207 23:37:08.776845  687309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:37:08.802259  687309 provision.go:143] copyHostCerts
	I1207 23:37:08.802351  687309 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:37:08.802363  687309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:37:08.802460  687309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:37:08.802621  687309 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:37:08.802637  687309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:37:08.802684  687309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:37:08.802819  687309 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:37:08.802832  687309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:37:08.803451  687309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:37:08.803609  687309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-312944 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-312944 localhost minikube]
	I1207 23:37:08.924820  687309 provision.go:177] copyRemoteCerts
	I1207 23:37:08.924880  687309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:37:08.924914  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:08.943947  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:09.051100  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1207 23:37:09.084406  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:37:09.104116  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:37:09.122448  687309 provision.go:87] duration metric: took 345.672125ms to configureAuth
	I1207 23:37:09.122485  687309 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:37:09.122723  687309 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:09.122898  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:09.152527  687309 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:09.152839  687309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1207 23:37:09.152875  687309 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:37:09.783862  687309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:37:09.783893  687309 machine.go:97] duration metric: took 4.534215722s to provisionDockerMachine
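provisionDockerMachine above writes CRIO_MINIKUBE_OPTIONS (the --insecure-registry flag for the service CIDR) to /etc/sysconfig/crio.minikube over SSH and restarts CRI-O. A quick way to confirm the override landed, using the profile name from this log (a sketch, not something the run itself executes):

    minikube -p default-k8s-diff-port-312944 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p default-k8s-diff-port-312944 ssh -- systemctl is-active crio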
	I1207 23:37:09.783906  687309 start.go:293] postStartSetup for "default-k8s-diff-port-312944" (driver="docker")
	I1207 23:37:09.783922  687309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:37:09.784000  687309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:37:09.784050  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:09.804027  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:09.899254  687309 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:37:09.903022  687309 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:37:09.903046  687309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:37:09.903058  687309 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:37:09.903108  687309 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:37:09.903182  687309 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:37:09.903269  687309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:37:09.911506  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:37:09.931221  687309 start.go:296] duration metric: took 147.295974ms for postStartSetup
	I1207 23:37:09.931387  687309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:37:09.931476  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:09.950851  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:10.042778  687309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:37:10.047507  687309 fix.go:56] duration metric: took 5.132570353s for fixHost
	I1207 23:37:10.047531  687309 start.go:83] releasing machines lock for "default-k8s-diff-port-312944", held for 5.132616614s
	I1207 23:37:10.047599  687309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-312944
	I1207 23:37:10.066677  687309 ssh_runner.go:195] Run: cat /version.json
	I1207 23:37:10.066749  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:10.066759  687309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:37:10.066839  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:10.086685  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:10.087600  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:10.253926  687309 ssh_runner.go:195] Run: systemctl --version
	I1207 23:37:10.261628  687309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:37:10.303664  687309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:37:10.309275  687309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:37:10.309439  687309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:37:10.319421  687309 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1207 23:37:10.319452  687309 start.go:496] detecting cgroup driver to use...
	I1207 23:37:10.319490  687309 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:37:10.319538  687309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:37:10.337147  687309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:37:10.354063  687309 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:37:10.354131  687309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:37:10.371992  687309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:37:10.389168  687309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:37:10.495834  687309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:37:10.599210  687309 docker.go:234] disabling docker service ...
	I1207 23:37:10.599293  687309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:37:10.617012  687309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:37:10.632804  687309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:37:10.734587  687309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:37:10.824989  687309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:37:10.840644  687309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:37:10.856737  687309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:37:10.856811  687309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.866390  687309 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:37:10.866468  687309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.875807  687309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.885215  687309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.895379  687309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:37:10.904010  687309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.914008  687309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.923534  687309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:10.932953  687309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:37:10.940895  687309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:37:10.948481  687309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:11.033722  687309 ssh_runner.go:195] Run: sudo systemctl restart crio
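The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to systemd, set conmon_cgroup, and add the unprivileged-port sysctl, before the daemon-reload and crio restart. A sketch for inspecting the drop-in from inside the node (paths and values taken from the commands above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, assuming the edits applied cleanly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",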
	I1207 23:37:11.170187  687309 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:37:11.170272  687309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:37:11.174951  687309 start.go:564] Will wait 60s for crictl version
	I1207 23:37:11.175003  687309 ssh_runner.go:195] Run: which crictl
	I1207 23:37:11.179346  687309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:37:11.211932  687309 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:37:11.212027  687309 ssh_runner.go:195] Run: crio --version
	I1207 23:37:11.243710  687309 ssh_runner.go:195] Run: crio --version
	I1207 23:37:11.274750  687309 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:37:11.276028  687309 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-312944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:37:11.295763  687309 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1207 23:37:11.300888  687309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
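The one-liner above updates /etc/hosts idempotently: strip any existing host.minikube.internal entry, append the fresh mapping, and copy the result into place with sudo (a plain "sudo echo >> /etc/hosts" would not elevate the redirection). The same pattern parameterized, as a sketch; NAME and IP are the values this profile uses per the log:

    NAME=host.minikube.internal
    IP=192.168.94.1
    { grep -v $'\t'"${NAME}\$" /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$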
	I1207 23:37:11.313373  687309 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-312944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:37:11.313543  687309 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:11.313601  687309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:37:11.348665  687309 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:37:11.348694  687309 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:37:11.348753  687309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:37:11.374438  687309 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:37:11.374462  687309 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:37:11.374470  687309 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.2 crio true true} ...
	I1207 23:37:11.374587  687309 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-312944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
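The kubelet ExecStart above is delivered as a systemd drop-in rather than by editing the unit itself; the scp steps further down place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to /lib/systemd/system/kubelet.service before the daemon-reload. To see how systemd merges the two (a sketch, run inside the node):

    systemctl cat kubelet    # base unit plus the 10-kubeadm.conf drop-in
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf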
	I1207 23:37:11.374665  687309 ssh_runner.go:195] Run: crio config
	I1207 23:37:11.422172  687309 cni.go:84] Creating CNI manager for ""
	I1207 23:37:11.422195  687309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 23:37:11.422219  687309 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:37:11.422239  687309 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-312944 NodeName:default-k8s-diff-port-312944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:37:11.422411  687309 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-312944"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:37:11.422493  687309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:37:11.431321  687309 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:37:11.431425  687309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:37:11.439544  687309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1207 23:37:11.452861  687309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:37:11.466957  687309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
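The 2224-byte file written above is the kubeadm config rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch for sanity-checking such a file before it is used; "kubeadm config validate" exists in recent kubeadm releases, but treat the exact subcommand as an assumption rather than something this run performs:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new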
	I1207 23:37:11.480742  687309 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:37:11.485173  687309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:37:11.495563  687309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:11.581098  687309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:37:11.606983  687309 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944 for IP: 192.168.94.2
	I1207 23:37:11.607006  687309 certs.go:195] generating shared ca certs ...
	I1207 23:37:11.607065  687309 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:11.607229  687309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:37:11.607291  687309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:37:11.607307  687309 certs.go:257] generating profile certs ...
	I1207 23:37:11.607441  687309 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/client.key
	I1207 23:37:11.607528  687309 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key.025605fa
	I1207 23:37:11.607598  687309 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.key
	I1207 23:37:11.607714  687309 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:37:11.607747  687309 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:37:11.607757  687309 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:37:11.607787  687309 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:37:11.607811  687309 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:37:11.607833  687309 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:37:11.607902  687309 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:37:11.608582  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:37:11.629973  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:37:11.650005  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:37:11.671965  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:37:11.702669  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1207 23:37:11.724166  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:37:11.750220  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:37:11.769521  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/default-k8s-diff-port-312944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:37:11.787613  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:37:11.805628  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:37:11.826834  687309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:37:11.845813  687309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:37:11.861618  687309 ssh_runner.go:195] Run: openssl version
	I1207 23:37:11.868699  687309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:11.877649  687309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:37:11.886218  687309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:11.890549  687309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:11.890608  687309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:11.938894  687309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:37:11.950819  687309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:37:11.962180  687309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:37:11.972501  687309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:37:11.976373  687309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:37:11.976428  687309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:37:12.012377  687309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:37:12.021807  687309 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:37:12.031638  687309 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:37:12.041611  687309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:37:12.046159  687309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:37:12.046230  687309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:37:12.101484  687309 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
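Each certificate above is linked into /usr/share/ca-certificates and then a <hash>.0 symlink is checked under /etc/ssl/certs; the hash is OpenSSL's subject hash for that certificate, which is why the b5213941.0 check follows the hash call on minikubeCA.pem. Sketch:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash, e.g. b5213941
    sudo test -L /etc/ssl/certs/b5213941.0 && echo "trust link present"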
	I1207 23:37:12.111349  687309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:37:12.117036  687309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 23:37:12.164285  687309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 23:37:12.217271  687309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 23:37:12.271890  687309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 23:37:12.317028  687309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 23:37:12.354360  687309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
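The -checkend 86400 calls above ask whether each certificate is still valid 86400 seconds (24 hours) from now; openssl exits 0 if so and non-zero if the cert expires within that window, so a failing check flags a certificate expiring within a day. Sketch with one of the paths from the log:

    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
        echo "valid for at least another 24h"
    else
        echo "expires within 24h"
    fi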
	I1207 23:37:12.393769  687309 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-312944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-312944 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:12.393881  687309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:37:12.393943  687309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:37:12.428893  687309 cri.go:89] found id: "362b83f015210f03925637b1b0598b825d674607d060c054cf459ff6794854a5"
	I1207 23:37:12.428918  687309 cri.go:89] found id: "fa639c7294ee1af933ce6c68db15470c1c2d5d2c404c5e0568eaac61e7ede373"
	I1207 23:37:12.428924  687309 cri.go:89] found id: "b04410a9187c7167576fa7f9cb5bf5a761981c61b37ea3b68eb353c721baab8f"
	I1207 23:37:12.428935  687309 cri.go:89] found id: "f27c08f4d2ee8d8898a367bb16db44c1f22130d15e95d71881aa776e8567269c"
	I1207 23:37:12.428939  687309 cri.go:89] found id: ""
	I1207 23:37:12.428990  687309 ssh_runner.go:195] Run: sudo runc list -f json
	W1207 23:37:12.441736  687309 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:37:12Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:37:12.441834  687309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:37:12.450163  687309 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1207 23:37:12.450191  687309 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1207 23:37:12.450250  687309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 23:37:12.459014  687309 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:37:12.460119  687309 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-312944" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:37:12.460912  687309 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-389542/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-312944" cluster setting kubeconfig missing "default-k8s-diff-port-312944" context setting]
	I1207 23:37:12.461997  687309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:12.464184  687309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 23:37:12.473882  687309 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1207 23:37:12.473916  687309 kubeadm.go:602] duration metric: took 23.717856ms to restartPrimaryControlPlane
	I1207 23:37:12.473927  687309 kubeadm.go:403] duration metric: took 80.176844ms to StartCluster
	I1207 23:37:12.473946  687309 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:12.474025  687309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:37:12.475543  687309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:12.475799  687309 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:37:12.475875  687309 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:37:12.475986  687309 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-312944"
	I1207 23:37:12.476013  687309 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-312944"
	W1207 23:37:12.476025  687309 addons.go:248] addon storage-provisioner should already be in state true
	I1207 23:37:12.476033  687309 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:12.476036  687309 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-312944"
	I1207 23:37:12.476054  687309 host.go:66] Checking if "default-k8s-diff-port-312944" exists ...
	I1207 23:37:12.476060  687309 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-312944"
	W1207 23:37:12.476072  687309 addons.go:248] addon dashboard should already be in state true
	I1207 23:37:12.476109  687309 host.go:66] Checking if "default-k8s-diff-port-312944" exists ...
	I1207 23:37:12.476036  687309 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-312944"
	I1207 23:37:12.476163  687309 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-312944"
	I1207 23:37:12.476455  687309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:37:12.476584  687309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:37:12.476605  687309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:37:12.478079  687309 out.go:179] * Verifying Kubernetes components...
	I1207 23:37:12.479378  687309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:12.505087  687309 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:37:12.506133  687309 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1207 23:37:12.506162  687309 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:37:12.506423  687309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:37:12.506502  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:12.508618  687309 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
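The addon wiring above stages the storage-provisioner and dashboard manifests for scp into /etc/kubernetes/addons and records the images each addon uses. The user-facing equivalent for this profile would be the addons subcommands (a sketch; both are standard minikube CLI):

    minikube -p default-k8s-diff-port-312944 addons list
    minikube -p default-k8s-diff-port-312944 addons enable dashboard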
	I1207 23:37:12.682029  684670 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 23:37:12.682124  684670 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:37:12.682251  684670 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:37:12.682398  684670 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:37:12.682468  684670 kubeadm.go:319] OS: Linux
	I1207 23:37:12.682540  684670 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:37:12.682599  684670 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:37:12.682666  684670 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:37:12.682724  684670 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:37:12.682792  684670 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:37:12.682865  684670 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:37:12.682936  684670 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:37:12.683014  684670 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:37:12.683127  684670 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:37:12.683256  684670 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:37:12.683423  684670 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:37:12.683543  684670 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 23:37:12.685463  684670 out.go:252]   - Generating certificates and keys ...
	I1207 23:37:12.685567  684670 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:37:12.685660  684670 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:37:12.685748  684670 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:37:12.685829  684670 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 23:37:12.685908  684670 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 23:37:12.685975  684670 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 23:37:12.686046  684670 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 23:37:12.686198  684670 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-600852 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1207 23:37:12.686272  684670 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 23:37:12.686446  684670 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-600852 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1207 23:37:12.686530  684670 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 23:37:12.686614  684670 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 23:37:12.686673  684670 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 23:37:12.686741  684670 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:37:12.687027  684670 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:37:12.687117  684670 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:37:12.687184  684670 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:37:12.687261  684670 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:37:12.687433  684670 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:37:12.687586  684670 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:37:12.687693  684670 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 23:37:12.689051  684670 out.go:252]   - Booting up control plane ...
	I1207 23:37:12.689204  684670 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:37:12.689307  684670 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:37:12.689668  684670 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:37:12.690053  684670 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:37:12.690319  684670 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:37:12.690569  684670 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:37:12.690695  684670 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:37:12.690775  684670 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:37:12.691018  684670 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 23:37:12.691172  684670 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 23:37:12.691248  684670 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.382697ms
	I1207 23:37:12.691379  684670 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 23:37:12.691479  684670 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1207 23:37:12.691595  684670 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 23:37:12.691690  684670 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 23:37:12.691789  684670 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.056198801s
	I1207 23:37:12.691873  684670 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.432500697s
	I1207 23:37:12.691967  684670 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502161176s
	I1207 23:37:12.692101  684670 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 23:37:12.692255  684670 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 23:37:12.692335  684670 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 23:37:12.692587  684670 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-600852 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 23:37:12.692669  684670 kubeadm.go:319] [bootstrap-token] Using token: kh1i16.e5yldh6cwcmarzt4
	I1207 23:37:12.695079  684670 out.go:252]   - Configuring RBAC rules ...
	I1207 23:37:12.695222  684670 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 23:37:12.695352  684670 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 23:37:12.695537  684670 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 23:37:12.695698  684670 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 23:37:12.695841  684670 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 23:37:12.695947  684670 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 23:37:12.696107  684670 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 23:37:12.696169  684670 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 23:37:12.696231  684670 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 23:37:12.696250  684670 kubeadm.go:319] 
	I1207 23:37:12.696319  684670 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 23:37:12.696339  684670 kubeadm.go:319] 
	I1207 23:37:12.696431  684670 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 23:37:12.696442  684670 kubeadm.go:319] 
	I1207 23:37:12.696474  684670 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 23:37:12.696556  684670 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 23:37:12.696621  684670 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 23:37:12.696631  684670 kubeadm.go:319] 
	I1207 23:37:12.696693  684670 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 23:37:12.696702  684670 kubeadm.go:319] 
	I1207 23:37:12.696760  684670 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 23:37:12.696770  684670 kubeadm.go:319] 
	I1207 23:37:12.696833  684670 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 23:37:12.696926  684670 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 23:37:12.697022  684670 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 23:37:12.697031  684670 kubeadm.go:319] 
	I1207 23:37:12.697125  684670 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 23:37:12.697236  684670 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 23:37:12.697248  684670 kubeadm.go:319] 
	I1207 23:37:12.697407  684670 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kh1i16.e5yldh6cwcmarzt4 \
	I1207 23:37:12.697570  684670 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 \
	I1207 23:37:12.697597  684670 kubeadm.go:319] 	--control-plane 
	I1207 23:37:12.697602  684670 kubeadm.go:319] 
	I1207 23:37:12.697708  684670 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 23:37:12.697714  684670 kubeadm.go:319] 
	I1207 23:37:12.697823  684670 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kh1i16.e5yldh6cwcmarzt4 \
	I1207 23:37:12.697972  684670 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 
	I1207 23:37:12.697988  684670 cni.go:84] Creating CNI manager for "kindnet"
	I1207 23:37:12.699540  684670 out.go:179] * Configuring CNI (Container Networking Interface) ...
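	[editor's note] The kubeadm output above pins the cluster CA in both join commands via --discovery-token-ca-cert-hash sha256:a6f9... . That value is the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch of recomputing such a hash is shown below; the /etc/kubernetes/pki/ca.crt path is the conventional kubeadm location and is an assumption here, not something taken from this test run or from minikube's code.

// cacerthash.go - recompute a kubeadm-style discovery-token CA cert hash.
// Illustrative sketch only.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes) // first PEM block is the CA certificate
	if block == nil {
		log.Fatal("no PEM data found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}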
	W1207 23:37:10.057107  673247 pod_ready.go:104] pod "coredns-66bc5c9577-wvgqf" is not "Ready", error: <nil>
	I1207 23:37:12.057979  673247 pod_ready.go:94] pod "coredns-66bc5c9577-wvgqf" is "Ready"
	I1207 23:37:12.058008  673247 pod_ready.go:86] duration metric: took 41.006545623s for pod "coredns-66bc5c9577-wvgqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.065711  673247 pod_ready.go:83] waiting for pod "etcd-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.077874  673247 pod_ready.go:94] pod "etcd-embed-certs-654118" is "Ready"
	I1207 23:37:12.077910  673247 pod_ready.go:86] duration metric: took 12.105816ms for pod "etcd-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.080983  673247 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.086764  673247 pod_ready.go:94] pod "kube-apiserver-embed-certs-654118" is "Ready"
	I1207 23:37:12.086795  673247 pod_ready.go:86] duration metric: took 5.779168ms for pod "kube-apiserver-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.089056  673247 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.255583  673247 pod_ready.go:94] pod "kube-controller-manager-embed-certs-654118" is "Ready"
	I1207 23:37:12.255617  673247 pod_ready.go:86] duration metric: took 166.534029ms for pod "kube-controller-manager-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.456117  673247 pod_ready.go:83] waiting for pod "kube-proxy-l75b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:12.855734  673247 pod_ready.go:94] pod "kube-proxy-l75b2" is "Ready"
	I1207 23:37:12.855768  673247 pod_ready.go:86] duration metric: took 399.618817ms for pod "kube-proxy-l75b2" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:13.055683  673247 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:13.455124  673247 pod_ready.go:94] pod "kube-scheduler-embed-certs-654118" is "Ready"
	I1207 23:37:13.455158  673247 pod_ready.go:86] duration metric: took 399.446873ms for pod "kube-scheduler-embed-certs-654118" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:13.455174  673247 pod_ready.go:40] duration metric: took 42.409128438s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:37:13.511191  673247 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 23:37:13.515463  673247 out.go:179] * Done! kubectl is now configured to use "embed-certs-654118" cluster and "default" namespace by default
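	[editor's note] The pod_ready.go lines above poll each kube-system pod until its Ready condition turns true (or the wait gives up). A rough client-go equivalent of that per-pod check is sketched below; waitPodReady, the kubeconfig path, the pod name, and the two-second retry interval are illustrative assumptions, not minikube's actual helper or settings.

// podready.go - poll a pod until its Ready condition is True.
// Illustrative sketch only; not the pod_ready.go implementation logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second): // retry interval (assumed)
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // path assumed
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "coredns-66bc5c9577-wvgqf"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}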
	I1207 23:37:12.510784  687309 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-312944"
	W1207 23:37:12.510809  687309 addons.go:248] addon default-storageclass should already be in state true
	I1207 23:37:12.510842  687309 host.go:66] Checking if "default-k8s-diff-port-312944" exists ...
	I1207 23:37:12.511320  687309 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:37:12.524472  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1207 23:37:12.524510  687309 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1207 23:37:12.524594  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:12.544266  687309 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:37:12.544296  687309 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:37:12.544395  687309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:37:12.549413  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:12.556644  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:37:12.571882  687309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
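	[editor's note] The docker container inspect -f calls above resolve which host port is mapped to the container's 22/tcp so the SSH clients on 127.0.0.1:33483 can be opened. A minimal Go sketch issuing the same query with os/exec is shown below, reusing the Go template from the cli_runner.go lines (without the extra surrounding quotes); the helper name sshHostPort is an illustrative assumption.

// sshport.go - resolve the host port mapped to a container's 22/tcp.
// Illustrative sketch; error handling reduced to the essentials.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("default-k8s-diff-port-312944")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("ssh -p %s docker@127.0.0.1\n", port)
}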
	I1207 23:37:12.637225  687309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:37:12.651683  687309 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-312944" to be "Ready" ...
	I1207 23:37:12.666169  687309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:37:12.668149  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1207 23:37:12.668178  687309 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1207 23:37:12.684512  687309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:37:12.689736  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1207 23:37:12.689808  687309 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1207 23:37:12.711585  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1207 23:37:12.711639  687309 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1207 23:37:12.738126  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1207 23:37:12.738153  687309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1207 23:37:12.756490  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1207 23:37:12.756517  687309 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1207 23:37:12.775895  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1207 23:37:12.775923  687309 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1207 23:37:12.794028  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1207 23:37:12.794094  687309 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1207 23:37:12.810198  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1207 23:37:12.810228  687309 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1207 23:37:12.830550  687309 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:37:12.830580  687309 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1207 23:37:12.845228  687309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1207 23:37:14.326249  687309 node_ready.go:49] node "default-k8s-diff-port-312944" is "Ready"
	I1207 23:37:14.326294  687309 node_ready.go:38] duration metric: took 1.674580102s for node "default-k8s-diff-port-312944" to be "Ready" ...
	I1207 23:37:14.326312  687309 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:37:14.326451  687309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:37:14.982123  687309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.315915331s)
	I1207 23:37:14.982200  687309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.297655548s)
	I1207 23:37:14.982463  687309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.137172371s)
	I1207 23:37:14.982514  687309 api_server.go:72] duration metric: took 2.506683292s to wait for apiserver process to appear ...
	I1207 23:37:14.982530  687309 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:37:14.982554  687309 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1207 23:37:14.985404  687309 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-312944 addons enable metrics-server
	
	I1207 23:37:14.988142  687309 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:37:14.988171  687309 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:37:14.992276  687309 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1207 23:37:12.701068  684670 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1207 23:37:12.707503  684670 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1207 23:37:12.707530  684670 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1207 23:37:12.732102  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 23:37:13.011122  684670 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 23:37:13.011195  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:13.011199  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-600852 minikube.k8s.io/updated_at=2025_12_07T23_37_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=kindnet-600852 minikube.k8s.io/primary=true
	I1207 23:37:13.023647  684670 ops.go:34] apiserver oom_adj: -16
	I1207 23:37:13.094742  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:13.595442  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:14.095554  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:14.595229  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:15.095535  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:15.595011  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:16.095093  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:16.594870  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:17.094842  684670 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:17.166708  684670 kubeadm.go:1114] duration metric: took 4.155583217s to wait for elevateKubeSystemPrivileges
	I1207 23:37:17.166756  684670 kubeadm.go:403] duration metric: took 16.092913541s to StartCluster
	I1207 23:37:17.166778  684670 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:17.166846  684670 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:37:17.168859  684670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:17.169139  684670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 23:37:17.169148  684670 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:37:17.169221  684670 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:37:17.169336  684670 addons.go:70] Setting storage-provisioner=true in profile "kindnet-600852"
	I1207 23:37:17.169359  684670 addons.go:239] Setting addon storage-provisioner=true in "kindnet-600852"
	I1207 23:37:17.169379  684670 addons.go:70] Setting default-storageclass=true in profile "kindnet-600852"
	I1207 23:37:17.169399  684670 config.go:182] Loaded profile config "kindnet-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:17.169411  684670 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-600852"
	I1207 23:37:17.169395  684670 host.go:66] Checking if "kindnet-600852" exists ...
	I1207 23:37:17.169855  684670 cli_runner.go:164] Run: docker container inspect kindnet-600852 --format={{.State.Status}}
	I1207 23:37:17.170020  684670 cli_runner.go:164] Run: docker container inspect kindnet-600852 --format={{.State.Status}}
	I1207 23:37:17.170716  684670 out.go:179] * Verifying Kubernetes components...
	I1207 23:37:17.172437  684670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:17.194276  684670 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:37:17.195154  684670 addons.go:239] Setting addon default-storageclass=true in "kindnet-600852"
	I1207 23:37:17.195205  684670 host.go:66] Checking if "kindnet-600852" exists ...
	I1207 23:37:17.195707  684670 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:37:17.195729  684670 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:37:17.195779  684670 cli_runner.go:164] Run: docker container inspect kindnet-600852 --format={{.State.Status}}
	I1207 23:37:17.195790  684670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-600852
	I1207 23:37:17.227629  684670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/kindnet-600852/id_rsa Username:docker}
	I1207 23:37:17.230378  684670 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:37:17.230611  684670 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:37:17.230707  684670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-600852
	I1207 23:37:17.268458  684670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/kindnet-600852/id_rsa Username:docker}
	I1207 23:37:17.282470  684670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 23:37:17.330457  684670 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:37:17.348754  684670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:37:17.382906  684670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:37:17.451617  684670 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1207 23:37:17.453385  684670 node_ready.go:35] waiting up to 15m0s for node "kindnet-600852" to be "Ready" ...
	I1207 23:37:17.668581  684670 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1207 23:37:14.995259  687309 addons.go:530] duration metric: took 2.51938942s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1207 23:37:15.483502  687309 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1207 23:37:15.488380  687309 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1207 23:37:15.488409  687309 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1207 23:37:15.982675  687309 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1207 23:37:15.989607  687309 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1207 23:37:15.990620  687309 api_server.go:141] control plane version: v1.34.2
	I1207 23:37:15.990646  687309 api_server.go:131] duration metric: took 1.008108817s to wait for apiserver health ...
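	[editor's note] The api_server.go lines above keep re-requesting https://192.168.94.2:8444/healthz until the 500 responses (rbac/bootstrap-roles still initializing) give way to a 200 "ok". A minimal sketch of such a poll loop is shown below; skipping TLS verification and the one-second interval are simplifications for illustration only (a real client would trust the cluster CA), and waitHealthz is an assumed name.

// healthz.go - poll an apiserver /healthz endpoint until it returns 200.
// Illustrative sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // simplification
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
		}
		time.Sleep(time.Second) // retry interval (assumed)
	}
	return fmt.Errorf("healthz did not return 200 within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.94.2:8444/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver is healthy")
}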
	I1207 23:37:15.990655  687309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:37:15.994192  687309 system_pods.go:59] 8 kube-system pods found
	I1207 23:37:15.994244  687309 system_pods.go:61] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:15.994259  687309 system_pods.go:61] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:37:15.994270  687309 system_pods.go:61] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:37:15.994291  687309 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:37:15.994305  687309 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:37:15.994312  687309 system_pods.go:61] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:37:15.994335  687309 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:37:15.994340  687309 system_pods.go:61] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Running
	I1207 23:37:15.994349  687309 system_pods.go:74] duration metric: took 3.6871ms to wait for pod list to return data ...
	I1207 23:37:15.994359  687309 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:37:15.996719  687309 default_sa.go:45] found service account: "default"
	I1207 23:37:15.996740  687309 default_sa.go:55] duration metric: took 2.371119ms for default service account to be created ...
	I1207 23:37:15.996750  687309 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:37:15.999790  687309 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:15.999816  687309 system_pods.go:89] "coredns-66bc5c9577-p4v2f" [113d6978-708b-4941-acbc-0fa4a639f318] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:15.999824  687309 system_pods.go:89] "etcd-default-k8s-diff-port-312944" [569e31ea-e77d-4156-a9f2-0970afca17bd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 23:37:15.999831  687309 system_pods.go:89] "kindnet-55xbl" [627ffd8d-a2eb-4d9c-b1bc-a71f609273bc] Running
	I1207 23:37:15.999839  687309 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-312944" [a2d3f5cd-a118-448c-a233-a6fe616b5b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1207 23:37:15.999852  687309 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-312944" [b5eaf61f-ba8d-4d44-8f2c-eb9ebae5e285] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 23:37:15.999881  687309 system_pods.go:89] "kube-proxy-7stg5" [b7e00d0a-bd16-45c1-a58c-e0569a0bcb33] Running
	I1207 23:37:15.999889  687309 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-312944" [ddd21134-7272-4134-8cc5-5fd8abb6abf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1207 23:37:15.999895  687309 system_pods.go:89] "storage-provisioner" [adffbdc2-708d-4f45-9f91-1697332156e3] Running
	I1207 23:37:15.999903  687309 system_pods.go:126] duration metric: took 3.146331ms to wait for k8s-apps to be running ...
	I1207 23:37:15.999911  687309 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:37:15.999966  687309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:37:16.014472  687309 system_svc.go:56] duration metric: took 14.550113ms WaitForService to wait for kubelet
	I1207 23:37:16.014510  687309 kubeadm.go:587] duration metric: took 3.538682419s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:37:16.014536  687309 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:37:16.017949  687309 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:37:16.017980  687309 node_conditions.go:123] node cpu capacity is 8
	I1207 23:37:16.017996  687309 node_conditions.go:105] duration metric: took 3.454545ms to run NodePressure ...
	I1207 23:37:16.018012  687309 start.go:242] waiting for startup goroutines ...
	I1207 23:37:16.018019  687309 start.go:247] waiting for cluster config update ...
	I1207 23:37:16.018030  687309 start.go:256] writing updated cluster config ...
	I1207 23:37:16.018338  687309 ssh_runner.go:195] Run: rm -f paused
	I1207 23:37:16.022608  687309 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:37:16.026747  687309 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p4v2f" in "kube-system" namespace to be "Ready" or be gone ...
	W1207 23:37:18.033653  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	I1207 23:37:17.669990  684670 addons.go:530] duration metric: took 500.771902ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1207 23:37:17.955540  684670 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-600852" context rescaled to 1 replicas
	W1207 23:37:19.457758  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:21.958282  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:20.532841  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:22.534300  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:24.536204  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:24.458362  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:26.958204  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:27.033760  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:29.533443  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 07 23:37:00 embed-certs-654118 crio[570]: time="2025-12-07T23:37:00.027155667Z" level=info msg="Started container" PID=1754 containerID=875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p/dashboard-metrics-scraper id=03b20226-72b5-4847-a476-debd2dbfc4cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=576929cd3297cf2a4ffc1b4dc1da0f6e5fa38c66dc9f1bcdc87a647aafdad827
	Dec 07 23:37:00 embed-certs-654118 crio[570]: time="2025-12-07T23:37:00.104320785Z" level=info msg="Removing container: e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c" id=c6642047-883b-4acc-8aee-1bd6de796b2b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:37:00 embed-certs-654118 crio[570]: time="2025-12-07T23:37:00.115349724Z" level=info msg="Removed container e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p/dashboard-metrics-scraper" id=c6642047-883b-4acc-8aee-1bd6de796b2b name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.109266184Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=421244e0-aa0b-420d-92d9-5d8e2be81334 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.110262336Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=562f11b8-6659-4ebb-ae75-2e7be4899127 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.111649962Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a0a04da3-4b11-41a8-93e4-ec41a03ea548 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.111759387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.116797447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.1169782Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b42eac09ef6bc6ccd7ea8acb48090d220e35ffa106b8cf78e81b08cc564cf2f0/merged/etc/passwd: no such file or directory"
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.117006969Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b42eac09ef6bc6ccd7ea8acb48090d220e35ffa106b8cf78e81b08cc564cf2f0/merged/etc/group: no such file or directory"
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.11727408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.148407032Z" level=info msg="Created container a230f8e09c8a793d24bc930a0fb7c9e8f555725f765382beb79ac8621a4e3455: kube-system/storage-provisioner/storage-provisioner" id=a0a04da3-4b11-41a8-93e4-ec41a03ea548 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.149175543Z" level=info msg="Starting container: a230f8e09c8a793d24bc930a0fb7c9e8f555725f765382beb79ac8621a4e3455" id=fee8b1fd-66b5-402e-9568-e73d843bb268 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:37:01 embed-certs-654118 crio[570]: time="2025-12-07T23:37:01.151125322Z" level=info msg="Started container" PID=1768 containerID=a230f8e09c8a793d24bc930a0fb7c9e8f555725f765382beb79ac8621a4e3455 description=kube-system/storage-provisioner/storage-provisioner id=fee8b1fd-66b5-402e-9568-e73d843bb268 name=/runtime.v1.RuntimeService/StartContainer sandboxID=184b49863aff7bb406732f03e8802327a73dc6bf00293d761e2bf93f05834919
	Dec 07 23:37:22 embed-certs-654118 crio[570]: time="2025-12-07T23:37:22.985317073Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=479742d7-a01f-4479-bfef-ebdccd5082f3 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:22 embed-certs-654118 crio[570]: time="2025-12-07T23:37:22.986931011Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8d9ec9a6-b86c-4f9e-8f76-0a5060d160c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:22 embed-certs-654118 crio[570]: time="2025-12-07T23:37:22.988778654Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p/dashboard-metrics-scraper" id=dbead2cc-92c2-499e-ad26-954fe1c7735b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:22 embed-certs-654118 crio[570]: time="2025-12-07T23:37:22.98917598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:22 embed-certs-654118 crio[570]: time="2025-12-07T23:37:22.997120395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:22 embed-certs-654118 crio[570]: time="2025-12-07T23:37:22.997899471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:23 embed-certs-654118 crio[570]: time="2025-12-07T23:37:23.038403248Z" level=info msg="Created container 977e8fafdf74218cf51fae0fe63b18398a1e392fd9aca04d48a77e94825c5eb1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p/dashboard-metrics-scraper" id=dbead2cc-92c2-499e-ad26-954fe1c7735b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:23 embed-certs-654118 crio[570]: time="2025-12-07T23:37:23.039162864Z" level=info msg="Starting container: 977e8fafdf74218cf51fae0fe63b18398a1e392fd9aca04d48a77e94825c5eb1" id=792f272d-fa3b-4292-b2d4-0f2f3d03bbdb name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:37:23 embed-certs-654118 crio[570]: time="2025-12-07T23:37:23.041654616Z" level=info msg="Started container" PID=1806 containerID=977e8fafdf74218cf51fae0fe63b18398a1e392fd9aca04d48a77e94825c5eb1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p/dashboard-metrics-scraper id=792f272d-fa3b-4292-b2d4-0f2f3d03bbdb name=/runtime.v1.RuntimeService/StartContainer sandboxID=576929cd3297cf2a4ffc1b4dc1da0f6e5fa38c66dc9f1bcdc87a647aafdad827
	Dec 07 23:37:23 embed-certs-654118 crio[570]: time="2025-12-07T23:37:23.176471408Z" level=info msg="Removing container: 875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e" id=f4d4cb02-bf86-4649-a32d-3e9b2c87dd39 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:37:23 embed-certs-654118 crio[570]: time="2025-12-07T23:37:23.189438701Z" level=info msg="Removed container 875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p/dashboard-metrics-scraper" id=f4d4cb02-bf86-4649-a32d-3e9b2c87dd39 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	977e8fafdf742       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   576929cd3297c       dashboard-metrics-scraper-6ffb444bf9-s2g7p   kubernetes-dashboard
	a230f8e09c8a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           29 seconds ago       Running             storage-provisioner         1                   184b49863aff7       storage-provisioner                          kube-system
	fbf4535fa2929       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   52 seconds ago       Running             kubernetes-dashboard        0                   d6a1266848dba       kubernetes-dashboard-855c9754f9-8dl4x        kubernetes-dashboard
	a6c98c6dc2249       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           About a minute ago   Running             coredns                     0                   399b2d963739d       coredns-66bc5c9577-wvgqf                     kube-system
	0f1dc0c7f1b35       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           About a minute ago   Running             busybox                     1                   78bdd627934b3       busybox                                      default
	fa59387c3b4d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           About a minute ago   Exited              storage-provisioner         0                   184b49863aff7       storage-provisioner                          kube-system
	64270ee075317       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           About a minute ago   Running             kindnet-cni                 0                   b11d7bd3b9609       kindnet-68q87                                kube-system
	9e595ec0ec0a2       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           About a minute ago   Running             kube-proxy                  0                   8de12bd876fcf       kube-proxy-l75b2                             kube-system
	55f614a7d8907       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           About a minute ago   Running             etcd                        0                   be9bd961329a8       etcd-embed-certs-654118                      kube-system
	de2a8fefd0407       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           About a minute ago   Running             kube-apiserver              0                   acf48a297b1a1       kube-apiserver-embed-certs-654118            kube-system
	63dcc5abcffa7       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           About a minute ago   Running             kube-scheduler              0                   b89c4f989e484       kube-scheduler-embed-certs-654118            kube-system
	1c04ccfa6ad08       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           About a minute ago   Running             kube-controller-manager     0                   58c086861a477       kube-controller-manager-embed-certs-654118   kube-system
	
	
	==> coredns [a6c98c6dc2249ec043cc985ad99b2be276e7fb077b56a646b774572f9b0e43e9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55677 - 27609 "HINFO IN 7821679087082351473.2883090864246873011. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021622619s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-654118
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-654118
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=embed-certs-654118
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_34_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:34:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-654118
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:37:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:37:20 +0000   Sun, 07 Dec 2025 23:34:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:37:20 +0000   Sun, 07 Dec 2025 23:34:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:37:20 +0000   Sun, 07 Dec 2025 23:34:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:37:20 +0000   Sun, 07 Dec 2025 23:35:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-654118
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                03c8ca8e-58f6-4b1a-acac-362ecdda585b
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-wvgqf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m27s
	  kube-system                 etcd-embed-certs-654118                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m33s
	  kube-system                 kindnet-68q87                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-embed-certs-654118             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-controller-manager-embed-certs-654118    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-l75b2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-embed-certs-654118             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s2g7p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8dl4x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m25s                  kube-proxy       
	  Normal  Starting                 60s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node embed-certs-654118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node embed-certs-654118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node embed-certs-654118 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m33s                  kubelet          Node embed-certs-654118 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m33s                  kubelet          Node embed-certs-654118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m33s                  kubelet          Node embed-certs-654118 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m33s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m28s                  node-controller  Node embed-certs-654118 event: Registered Node embed-certs-654118 in Controller
	  Normal  NodeReady                106s                   kubelet          Node embed-certs-654118 status is now: NodeReady
	  Normal  Starting                 65s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 65s)      kubelet          Node embed-certs-654118 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 65s)      kubelet          Node embed-certs-654118 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 65s)      kubelet          Node embed-certs-654118 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           59s                    node-controller  Node embed-certs-654118 event: Registered Node embed-certs-654118 in Controller
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [55f614a7d89079ce6b0150051faf8399dea9fe3ee0db5301b1f6eb9811f274fb] <==
	{"level":"warn","ts":"2025-12-07T23:36:28.680963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.688222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.696058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.703415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.710892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.718034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.729497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.736446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.743293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.750162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.757381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.764194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.771871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.787348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.795810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.805257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.813631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.834452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.841539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.849819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:28.905675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:36:56.262457Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.676561ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790530665223496 > lease_revoke:<id:40899afb2c7a94b0>","response":"size:28"}
	{"level":"info","ts":"2025-12-07T23:36:56.262578Z","caller":"traceutil/trace.go:172","msg":"trace[891339229] linearizableReadLoop","detail":"{readStateIndex:694; appliedIndex:693; }","duration":"126.822942ms","start":"2025-12-07T23:36:56.135740Z","end":"2025-12-07T23:36:56.262563Z","steps":["trace[891339229] 'read index received'  (duration: 39.193µs)","trace[891339229] 'applied index is now lower than readState.Index'  (duration: 126.782684ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-07T23:36:56.262708Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.964796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-654118\" limit:1 ","response":"range_response_count:1 size:5709"}
	{"level":"info","ts":"2025-12-07T23:36:56.262735Z","caller":"traceutil/trace.go:172","msg":"trace[76058325] range","detail":"{range_begin:/registry/minions/embed-certs-654118; range_end:; response_count:1; response_revision:653; }","duration":"127.001291ms","start":"2025-12-07T23:36:56.135725Z","end":"2025-12-07T23:36:56.262727Z","steps":["trace[76058325] 'agreement among raft nodes before linearized reading'  (duration: 126.876239ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:37:31 up  2:19,  0 user,  load average: 3.50, 2.80, 2.08
	Linux embed-certs-654118 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [64270ee075317594cd8574f52acb74ad205fd052a7c4a7a070e7c82ad1a83c22] <==
	I1207 23:36:30.623718       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1207 23:36:30.625641       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:36:30.625699       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:36:30.625770       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:36:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:36:30.829172       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:36:30.891156       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:36:30.891187       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:36:30.891346       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:36:31.119792       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:36:31.119848       1 metrics.go:72] Registering metrics
	I1207 23:36:31.120027       1 controller.go:711] "Syncing nftables rules"
	I1207 23:36:40.828946       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:36:40.829008       1 main.go:301] handling current node
	I1207 23:36:50.834444       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:36:50.834474       1 main.go:301] handling current node
	I1207 23:37:00.829263       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:37:00.829301       1 main.go:301] handling current node
	I1207 23:37:10.830407       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:37:10.830462       1 main.go:301] handling current node
	I1207 23:37:20.829203       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:37:20.829249       1 main.go:301] handling current node
	I1207 23:37:30.834875       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1207 23:37:30.834924       1 main.go:301] handling current node
	
	
	==> kube-apiserver [de2a8fefd04073ed27eff698be1e31a40e77a0d4e91f60687ad522521cb5f30a] <==
	I1207 23:36:29.456955       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1207 23:36:29.460850       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1207 23:36:29.460975       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1207 23:36:29.461375       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:36:29.461531       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1207 23:36:29.461598       1 aggregator.go:171] initial CRD sync complete...
	I1207 23:36:29.461610       1 autoregister_controller.go:144] Starting autoregister controller
	I1207 23:36:29.461618       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 23:36:29.461625       1 cache.go:39] Caches are synced for autoregister controller
	E1207 23:36:29.475602       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:36:29.481069       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1207 23:36:29.505454       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:36:29.529462       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1207 23:36:29.780486       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:36:29.814771       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:36:29.841798       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:36:29.852119       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:36:29.860135       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:36:29.898472       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.27.191"}
	I1207 23:36:29.915124       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.248.19"}
	I1207 23:36:30.358039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 23:36:32.840057       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:36:33.287744       1 controller.go:667] quota admission added evaluator for: endpoints
	I1207 23:36:33.437136       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:36:33.437146       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1c04ccfa6ad08a37efa73abd2f81a78cc8ab1e12cae0f419d99b512bde0a19c0] <==
	I1207 23:36:32.803817       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1207 23:36:32.803821       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1207 23:36:32.804748       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1207 23:36:32.808143       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1207 23:36:32.809355       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1207 23:36:32.811598       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1207 23:36:32.815938       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1207 23:36:32.816112       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1207 23:36:32.816195       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-654118"
	I1207 23:36:32.816259       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1207 23:36:32.834475       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1207 23:36:32.834528       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1207 23:36:32.834539       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1207 23:36:32.834543       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1207 23:36:32.834549       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1207 23:36:32.834556       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1207 23:36:32.834574       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1207 23:36:32.834664       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1207 23:36:32.834735       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1207 23:36:32.834682       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1207 23:36:32.834955       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1207 23:36:32.837253       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1207 23:36:32.839605       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1207 23:36:32.839646       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 23:36:32.865106       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9e595ec0ec0a2a4f455100334da2b7bc91d7b90dbc422aa9f96b4bfcbd14e784] <==
	I1207 23:36:30.466631       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:36:30.533978       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 23:36:30.635080       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 23:36:30.635118       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1207 23:36:30.635848       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:36:30.668546       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:36:30.668611       1 server_linux.go:132] "Using iptables Proxier"
	I1207 23:36:30.676060       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:36:30.678468       1 server.go:527] "Version info" version="v1.34.2"
	I1207 23:36:30.678515       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:36:30.680250       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:36:30.680273       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:36:30.680308       1 config.go:200] "Starting service config controller"
	I1207 23:36:30.680315       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:36:30.680395       1 config.go:309] "Starting node config controller"
	I1207 23:36:30.680407       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:36:30.681486       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:36:30.681553       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:36:30.781397       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1207 23:36:30.781479       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:36:30.781615       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:36:30.781779       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [63dcc5abcffa72045b4ce0dfe82b7bff6403005be06354ce602e9140d0e7be08] <==
	I1207 23:36:28.069720       1 serving.go:386] Generated self-signed cert in-memory
	W1207 23:36:29.409700       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:36:29.409748       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:36:29.409770       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:36:29.409779       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:36:29.447793       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1207 23:36:29.447825       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:36:29.450824       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:36:29.450881       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:36:29.451496       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:36:29.451579       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:36:29.552086       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 23:36:33 embed-certs-654118 kubelet[737]: I1207 23:36:33.543427     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e62df48b-0039-460c-a6cc-935084c26cf3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-s2g7p\" (UID: \"e62df48b-0039-460c-a6cc-935084c26cf3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p"
	Dec 07 23:36:40 embed-certs-654118 kubelet[737]: I1207 23:36:40.460700     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8dl4x" podStartSLOduration=2.725759245 podStartE2EDuration="7.460657233s" podCreationTimestamp="2025-12-07 23:36:33 +0000 UTC" firstStartedPulling="2025-12-07 23:36:33.738651039 +0000 UTC m=+6.844756216" lastFinishedPulling="2025-12-07 23:36:38.473549016 +0000 UTC m=+11.579654204" observedRunningTime="2025-12-07 23:36:39.053166273 +0000 UTC m=+12.159271469" watchObservedRunningTime="2025-12-07 23:36:40.460657233 +0000 UTC m=+13.566762426"
	Dec 07 23:36:42 embed-certs-654118 kubelet[737]: I1207 23:36:42.048445     737 scope.go:117] "RemoveContainer" containerID="a8def650128b6b0deb078ecef07e4892c67193bc5598fc8adf125c8bbec80e14"
	Dec 07 23:36:43 embed-certs-654118 kubelet[737]: I1207 23:36:43.053603     737 scope.go:117] "RemoveContainer" containerID="a8def650128b6b0deb078ecef07e4892c67193bc5598fc8adf125c8bbec80e14"
	Dec 07 23:36:43 embed-certs-654118 kubelet[737]: I1207 23:36:43.053930     737 scope.go:117] "RemoveContainer" containerID="e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c"
	Dec 07 23:36:43 embed-certs-654118 kubelet[737]: E1207 23:36:43.054149     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2g7p_kubernetes-dashboard(e62df48b-0039-460c-a6cc-935084c26cf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p" podUID="e62df48b-0039-460c-a6cc-935084c26cf3"
	Dec 07 23:36:44 embed-certs-654118 kubelet[737]: I1207 23:36:44.058798     737 scope.go:117] "RemoveContainer" containerID="e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c"
	Dec 07 23:36:44 embed-certs-654118 kubelet[737]: E1207 23:36:44.058996     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2g7p_kubernetes-dashboard(e62df48b-0039-460c-a6cc-935084c26cf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p" podUID="e62df48b-0039-460c-a6cc-935084c26cf3"
	Dec 07 23:36:47 embed-certs-654118 kubelet[737]: I1207 23:36:47.649780     737 scope.go:117] "RemoveContainer" containerID="e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c"
	Dec 07 23:36:47 embed-certs-654118 kubelet[737]: E1207 23:36:47.650063     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2g7p_kubernetes-dashboard(e62df48b-0039-460c-a6cc-935084c26cf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p" podUID="e62df48b-0039-460c-a6cc-935084c26cf3"
	Dec 07 23:36:59 embed-certs-654118 kubelet[737]: I1207 23:36:59.985504     737 scope.go:117] "RemoveContainer" containerID="e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c"
	Dec 07 23:37:00 embed-certs-654118 kubelet[737]: I1207 23:37:00.102706     737 scope.go:117] "RemoveContainer" containerID="e9239524be180388617e185be0ee87ddf1fcc6fd9e306ae47ab9c54b693d8f2c"
	Dec 07 23:37:00 embed-certs-654118 kubelet[737]: I1207 23:37:00.102986     737 scope.go:117] "RemoveContainer" containerID="875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e"
	Dec 07 23:37:00 embed-certs-654118 kubelet[737]: E1207 23:37:00.103198     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2g7p_kubernetes-dashboard(e62df48b-0039-460c-a6cc-935084c26cf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p" podUID="e62df48b-0039-460c-a6cc-935084c26cf3"
	Dec 07 23:37:01 embed-certs-654118 kubelet[737]: I1207 23:37:01.108881     737 scope.go:117] "RemoveContainer" containerID="fa59387c3b4d4bfd483cee16a4f633f23a1c3789f8c37f1fa4f4d2b9c9a3ed6a"
	Dec 07 23:37:07 embed-certs-654118 kubelet[737]: I1207 23:37:07.649918     737 scope.go:117] "RemoveContainer" containerID="875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e"
	Dec 07 23:37:07 embed-certs-654118 kubelet[737]: E1207 23:37:07.650159     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2g7p_kubernetes-dashboard(e62df48b-0039-460c-a6cc-935084c26cf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p" podUID="e62df48b-0039-460c-a6cc-935084c26cf3"
	Dec 07 23:37:22 embed-certs-654118 kubelet[737]: I1207 23:37:22.984801     737 scope.go:117] "RemoveContainer" containerID="875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e"
	Dec 07 23:37:23 embed-certs-654118 kubelet[737]: I1207 23:37:23.173651     737 scope.go:117] "RemoveContainer" containerID="875b7b94a37e52c746df5e05f215dfa5f1c92f794887cacf6865c3d4f41b062e"
	Dec 07 23:37:23 embed-certs-654118 kubelet[737]: I1207 23:37:23.173903     737 scope.go:117] "RemoveContainer" containerID="977e8fafdf74218cf51fae0fe63b18398a1e392fd9aca04d48a77e94825c5eb1"
	Dec 07 23:37:23 embed-certs-654118 kubelet[737]: E1207 23:37:23.174101     737 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s2g7p_kubernetes-dashboard(e62df48b-0039-460c-a6cc-935084c26cf3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s2g7p" podUID="e62df48b-0039-460c-a6cc-935084c26cf3"
	Dec 07 23:37:25 embed-certs-654118 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 07 23:37:25 embed-certs-654118 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 07 23:37:25 embed-certs-654118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 07 23:37:25 embed-certs-654118 systemd[1]: kubelet.service: Consumed 1.967s CPU time.
	
	
	==> kubernetes-dashboard [fbf4535fa292992611e22cc68e13a796e2e4470d6418b306a556048000c2c4a4] <==
	2025/12/07 23:36:38 Using namespace: kubernetes-dashboard
	2025/12/07 23:36:38 Using in-cluster config to connect to apiserver
	2025/12/07 23:36:38 Using secret token for csrf signing
	2025/12/07 23:36:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/07 23:36:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/07 23:36:38 Successful initial request to the apiserver, version: v1.34.2
	2025/12/07 23:36:38 Generating JWE encryption key
	2025/12/07 23:36:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/07 23:36:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/07 23:36:39 Initializing JWE encryption key from synchronized object
	2025/12/07 23:36:39 Creating in-cluster Sidecar client
	2025/12/07 23:36:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/07 23:36:39 Serving insecurely on HTTP port: 9090
	2025/12/07 23:37:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/07 23:36:38 Starting overwatch
	
	
	==> storage-provisioner [a230f8e09c8a793d24bc930a0fb7c9e8f555725f765382beb79ac8621a4e3455] <==
	W1207 23:37:01.175121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:04.631190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:08.895190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:12.497520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:15.551488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:18.573920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:18.578514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:37:18.578674       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 23:37:18.578788       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69a0c6a4-6b58-458f-b7fc-bc544f9a2bed", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-654118_eda95e15-9331-4d69-961f-ac0635ce5997 became leader
	I1207 23:37:18.578823       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-654118_eda95e15-9331-4d69-961f-ac0635ce5997!
	W1207 23:37:18.581405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:18.584718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:37:18.679612       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-654118_eda95e15-9331-4d69-961f-ac0635ce5997!
	W1207 23:37:20.588454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:20.592594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:22.597618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:22.602457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:24.606433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:24.669381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:26.673083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:26.677014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:28.680559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:28.686803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:30.690129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:30.694090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fa59387c3b4d4bfd483cee16a4f633f23a1c3789f8c37f1fa4f4d2b9c9a3ed6a] <==
	I1207 23:36:30.417191       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1207 23:37:00.421942       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-654118 -n embed-certs-654118
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-654118 -n embed-certs-654118: exit status 2 (346.046239ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-654118 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.77s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-312944 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-312944 --alsologtostderr -v=1: exit status 80 (2.101559843s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-312944 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:38:04.843566  702904 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:38:04.843862  702904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:38:04.843867  702904 out.go:374] Setting ErrFile to fd 2...
	I1207 23:38:04.843875  702904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:38:04.844237  702904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:38:04.844564  702904 out.go:368] Setting JSON to false
	I1207 23:38:04.844581  702904 mustload.go:66] Loading cluster: default-k8s-diff-port-312944
	I1207 23:38:04.845427  702904 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:38:04.845906  702904 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-312944 --format={{.State.Status}}
	I1207 23:38:04.876269  702904 host.go:66] Checking if "default-k8s-diff-port-312944" exists ...
	I1207 23:38:04.876648  702904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:38:04.969743  702904 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-07 23:38:04.955361122 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:38:04.970663  702904 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-312944 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1207 23:38:04.973469  702904 out.go:179] * Pausing node default-k8s-diff-port-312944 ... 
	I1207 23:38:04.974796  702904 host.go:66] Checking if "default-k8s-diff-port-312944" exists ...
	I1207 23:38:04.975173  702904 ssh_runner.go:195] Run: systemctl --version
	I1207 23:38:04.975235  702904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-312944
	I1207 23:38:05.004964  702904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/default-k8s-diff-port-312944/id_rsa Username:docker}
	I1207 23:38:05.120627  702904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:38:05.140997  702904 pause.go:52] kubelet running: true
	I1207 23:38:05.141122  702904 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:38:05.379582  702904 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:38:05.379674  702904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:38:05.461617  702904 cri.go:89] found id: "058865ddda268775bdf21f4e133779ac38c262c9ded903bf758c68c656ba4b37"
	I1207 23:38:05.461647  702904 cri.go:89] found id: "8eb4661f40adb7e3bc509b1d373b2ad35becf93ce0d8b257ae68088048cea1a3"
	I1207 23:38:05.461654  702904 cri.go:89] found id: "ae571d49269c915740fb2cf23f9df93b135ad116f7f7e358c4a59ecfac859a14"
	I1207 23:38:05.461659  702904 cri.go:89] found id: "1141bc53141e8e773858f382cacf8f035e2c792f49fad9bc151a5de36582d819"
	I1207 23:38:05.461664  702904 cri.go:89] found id: "03d7391848685b4e4adc0e0cbeb5a8f00b9ca0ce5cf2a95d3e89a3e413264d20"
	I1207 23:38:05.461670  702904 cri.go:89] found id: "362b83f015210f03925637b1b0598b825d674607d060c054cf459ff6794854a5"
	I1207 23:38:05.461674  702904 cri.go:89] found id: "fa639c7294ee1af933ce6c68db15470c1c2d5d2c404c5e0568eaac61e7ede373"
	I1207 23:38:05.461679  702904 cri.go:89] found id: "b04410a9187c7167576fa7f9cb5bf5a761981c61b37ea3b68eb353c721baab8f"
	I1207 23:38:05.461683  702904 cri.go:89] found id: "f27c08f4d2ee8d8898a367bb16db44c1f22130d15e95d71881aa776e8567269c"
	I1207 23:38:05.461697  702904 cri.go:89] found id: "97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42"
	I1207 23:38:05.461706  702904 cri.go:89] found id: "d0dece358b07ad46edbe28384e450be226ec46d5ce2446c6c96076c671ea49ad"
	I1207 23:38:05.461711  702904 cri.go:89] found id: ""
	I1207 23:38:05.461759  702904 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:38:05.480576  702904 retry.go:31] will retry after 352.855682ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:38:05Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:38:05.834228  702904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:38:05.848144  702904 pause.go:52] kubelet running: false
	I1207 23:38:05.848212  702904 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:38:05.995857  702904 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:38:05.995966  702904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:38:06.077587  702904 cri.go:89] found id: "058865ddda268775bdf21f4e133779ac38c262c9ded903bf758c68c656ba4b37"
	I1207 23:38:06.077611  702904 cri.go:89] found id: "8eb4661f40adb7e3bc509b1d373b2ad35becf93ce0d8b257ae68088048cea1a3"
	I1207 23:38:06.077615  702904 cri.go:89] found id: "ae571d49269c915740fb2cf23f9df93b135ad116f7f7e358c4a59ecfac859a14"
	I1207 23:38:06.077619  702904 cri.go:89] found id: "1141bc53141e8e773858f382cacf8f035e2c792f49fad9bc151a5de36582d819"
	I1207 23:38:06.077622  702904 cri.go:89] found id: "03d7391848685b4e4adc0e0cbeb5a8f00b9ca0ce5cf2a95d3e89a3e413264d20"
	I1207 23:38:06.077626  702904 cri.go:89] found id: "362b83f015210f03925637b1b0598b825d674607d060c054cf459ff6794854a5"
	I1207 23:38:06.077629  702904 cri.go:89] found id: "fa639c7294ee1af933ce6c68db15470c1c2d5d2c404c5e0568eaac61e7ede373"
	I1207 23:38:06.077632  702904 cri.go:89] found id: "b04410a9187c7167576fa7f9cb5bf5a761981c61b37ea3b68eb353c721baab8f"
	I1207 23:38:06.077634  702904 cri.go:89] found id: "f27c08f4d2ee8d8898a367bb16db44c1f22130d15e95d71881aa776e8567269c"
	I1207 23:38:06.077640  702904 cri.go:89] found id: "97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42"
	I1207 23:38:06.077643  702904 cri.go:89] found id: "d0dece358b07ad46edbe28384e450be226ec46d5ce2446c6c96076c671ea49ad"
	I1207 23:38:06.077646  702904 cri.go:89] found id: ""
	I1207 23:38:06.077692  702904 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:38:06.090794  702904 retry.go:31] will retry after 471.099023ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:38:06Z" level=error msg="open /run/runc: no such file or directory"
	I1207 23:38:06.562520  702904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:38:06.576575  702904 pause.go:52] kubelet running: false
	I1207 23:38:06.576640  702904 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1207 23:38:06.728229  702904 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1207 23:38:06.728357  702904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1207 23:38:06.812860  702904 cri.go:89] found id: "058865ddda268775bdf21f4e133779ac38c262c9ded903bf758c68c656ba4b37"
	I1207 23:38:06.812883  702904 cri.go:89] found id: "8eb4661f40adb7e3bc509b1d373b2ad35becf93ce0d8b257ae68088048cea1a3"
	I1207 23:38:06.812984  702904 cri.go:89] found id: "ae571d49269c915740fb2cf23f9df93b135ad116f7f7e358c4a59ecfac859a14"
	I1207 23:38:06.812990  702904 cri.go:89] found id: "1141bc53141e8e773858f382cacf8f035e2c792f49fad9bc151a5de36582d819"
	I1207 23:38:06.812996  702904 cri.go:89] found id: "03d7391848685b4e4adc0e0cbeb5a8f00b9ca0ce5cf2a95d3e89a3e413264d20"
	I1207 23:38:06.813005  702904 cri.go:89] found id: "362b83f015210f03925637b1b0598b825d674607d060c054cf459ff6794854a5"
	I1207 23:38:06.813009  702904 cri.go:89] found id: "fa639c7294ee1af933ce6c68db15470c1c2d5d2c404c5e0568eaac61e7ede373"
	I1207 23:38:06.813015  702904 cri.go:89] found id: "b04410a9187c7167576fa7f9cb5bf5a761981c61b37ea3b68eb353c721baab8f"
	I1207 23:38:06.813036  702904 cri.go:89] found id: "f27c08f4d2ee8d8898a367bb16db44c1f22130d15e95d71881aa776e8567269c"
	I1207 23:38:06.813057  702904 cri.go:89] found id: "97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42"
	I1207 23:38:06.813062  702904 cri.go:89] found id: "d0dece358b07ad46edbe28384e450be226ec46d5ce2446c6c96076c671ea49ad"
	I1207 23:38:06.813066  702904 cri.go:89] found id: ""
	I1207 23:38:06.813123  702904 ssh_runner.go:195] Run: sudo runc list -f json
	I1207 23:38:06.834420  702904 out.go:203] 
	W1207 23:38:06.837573  702904 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:38:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:38:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1207 23:38:06.837609  702904 out.go:285] * 
	* 
	W1207 23:38:06.845098  702904 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 23:38:06.846957  702904 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-312944 --alsologtostderr -v=1 failed: exit status 80
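The exit status 80 above traces back to the pause path shelling into the node and running `sudo runc list -f json`, which fails because /run/runc (runc's default state directory) is not present on this CRI-O node. The following is a minimal Go sketch of that step for illustration only; the helper name and the up-front directory probe are assumptions, not minikube code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// listRuncContainers mirrors the failing step above: runc reads container
// state from its --root directory (default /run/runc), so when that directory
// is missing it exits non-zero with "open /run/runc: no such file or
// directory". Probing the directory first turns that into a clearer error.
func listRuncContainers(root string) ([]byte, error) {
	if _, err := os.Stat(root); err != nil {
		return nil, fmt.Errorf("runc state dir %s not present: %w", root, err)
	}
	return exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
}

func main() {
	out, err := listRuncContainers("/run/runc")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s\n", out)
}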
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-312944
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-312944:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61",
	        "Created": "2025-12-07T23:35:53.17207692Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 687513,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:37:04.966230146Z",
	            "FinishedAt": "2025-12-07T23:37:04.007935147Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61/hostname",
	        "HostsPath": "/var/lib/docker/containers/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61/hosts",
	        "LogPath": "/var/lib/docker/containers/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61-json.log",
	        "Name": "/default-k8s-diff-port-312944",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-312944:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-312944",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61",
	                "LowerDir": "/var/lib/docker/overlay2/0118ae1fd177a027d3c4130ba6cb419228d15d23a753279249b22be530579070-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0118ae1fd177a027d3c4130ba6cb419228d15d23a753279249b22be530579070/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0118ae1fd177a027d3c4130ba6cb419228d15d23a753279249b22be530579070/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0118ae1fd177a027d3c4130ba6cb419228d15d23a753279249b22be530579070/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-312944",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-312944/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-312944",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-312944",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-312944",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5f942a550c56dd9081abe1d3b1e36641c4925906b3582795c4fda0bbe2174dd8",
	            "SandboxKey": "/var/run/docker/netns/5f942a550c56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-312944": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "217dc275cbc6467e058b35e68e0b1d3b5b2cb07cc2e90f33cf455ec5c147cec4",
	                    "EndpointID": "532627a0168cf10b204310218998e053bf627273757d970f30a2d61e2fa8843a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "36:52:73:3c:63:a5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-312944",
	                        "df4662170d3c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
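The inspect dump above is where the post-mortem reads the network settings; the published host ports sit under NetworkSettings.Ports. A short, hypothetical Go helper (not part of helpers_test.go) that pulls them out of the same `docker inspect` output:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// portBinding matches the HostIp/HostPort objects in the inspect dump above.
type portBinding struct {
	HostIp   string
	HostPort string
}

type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	// Profile name taken from the dump above; docker inspect always returns a
	// JSON array, even for a single container.
	out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-312944").Output()
	if err != nil {
		panic(err)
	}
	var containers []container
	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
		panic(fmt.Sprintf("unexpected inspect output: %v", err))
	}
	// For this run it prints lines such as "8444/tcp -> 127.0.0.1:33486".
	for port, bindings := range containers[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}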
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944: exit status 2 (413.813755ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
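For reference, the status probe above prints the host state on stdout while signalling overall health through the exit code: here it reports "Running" yet exits 2, which is why the helper notes "(may be ok)". A hedged Go sketch of reading both, reusing the profile name from this run (the reading of exit status 2 is an assumption, not minikube's documented contract):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "default-k8s-diff-port-312944")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above this path is taken: stdout says "Running" while the
		// command exits 2, i.e. the node container is up but not every
		// component is reported healthy.
		fmt.Printf("host=%s exit=%d\n", host, exitErr.ExitCode())
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("host=%s (exit 0)\n", host)
}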
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-312944 logs -n 25
I1207 23:38:07.344079  393125 config.go:182] Loaded profile config "kindnet-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-312944 logs -n 25: (1.583237115s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-600852 sudo systemctl cat docker --no-pager                                                                                                                │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo docker system info                                                                                                                             │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cri-dockerd --version                                                                                                                          │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo containerd config dump                                                                                                                         │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ delete  │ -p embed-certs-654118                                                                                                                                              │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo crio config                                                                                                                                    │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ delete  │ -p auto-600852                                                                                                                                                     │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ delete  │ -p embed-certs-654118                                                                                                                                              │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ start   │ -p calico-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-600852                │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ start   │ -p custom-flannel-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-600852        │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ image   │ default-k8s-diff-port-312944 image list --format=json                                                                                                              │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:38 UTC │ 07 Dec 25 23:38 UTC │
	│ pause   │ -p default-k8s-diff-port-312944 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:38 UTC │                     │
	│ ssh     │ -p kindnet-600852 pgrep -a kubelet                                                                                                                                 │ kindnet-600852               │ jenkins │ v1.37.0 │ 07 Dec 25 23:38 UTC │ 07 Dec 25 23:38 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:37:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:37:35.462168  697240 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:37:35.462277  697240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:37:35.462289  697240 out.go:374] Setting ErrFile to fd 2...
	I1207 23:37:35.462294  697240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:37:35.462540  697240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:37:35.463172  697240 out.go:368] Setting JSON to false
	I1207 23:37:35.464794  697240 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8399,"bootTime":1765142256,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:37:35.464880  697240 start.go:143] virtualization: kvm guest
	I1207 23:37:35.466843  697240 out.go:179] * [custom-flannel-600852] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:37:35.468251  697240 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:37:35.468287  697240 notify.go:221] Checking for updates...
	I1207 23:37:35.471267  697240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:37:35.472792  697240 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:37:35.473878  697240 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:37:35.475283  697240 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:37:35.476465  697240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:37:35.400195  697202 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:35.400353  697202 config.go:182] Loaded profile config "kindnet-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:35.400514  697202 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:37:35.429288  697202 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:37:35.429477  697202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:35.494816  697202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-07 23:37:35.48406654 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:35.494929  697202 docker.go:319] overlay module found
	I1207 23:37:35.497562  697202 out.go:179] * Using the docker driver based on user configuration
	I1207 23:37:35.478098  697240 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:35.478226  697240 config.go:182] Loaded profile config "kindnet-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:35.478393  697240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:37:35.505909  697240 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:37:35.506077  697240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:35.571510  697240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-07 23:37:35.560842683 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:35.571614  697240 docker.go:319] overlay module found
	I1207 23:37:35.498843  697202 start.go:309] selected driver: docker
	I1207 23:37:35.498868  697202 start.go:927] validating driver "docker" against <nil>
	I1207 23:37:35.498886  697202 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:37:35.499584  697202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:35.571218  697202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-07 23:37:35.560842683 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:35.571389  697202 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 23:37:35.571712  697202 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:37:35.573371  697202 out.go:179] * Using Docker driver with root privileges
	I1207 23:37:35.573376  697240 out.go:179] * Using the docker driver based on user configuration
	I1207 23:37:35.574608  697202 cni.go:84] Creating CNI manager for "calico"
	I1207 23:37:35.574627  697202 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1207 23:37:35.574707  697202 start.go:353] cluster config:
	{Name:calico-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:35.576216  697202 out.go:179] * Starting "calico-600852" primary control-plane node in "calico-600852" cluster
	I1207 23:37:35.577387  697202 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:37:35.578730  697202 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:37:35.579818  697202 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:35.579894  697202 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 23:37:35.579910  697202 cache.go:65] Caching tarball of preloaded images
	I1207 23:37:35.579944  697202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:37:35.580081  697202 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:37:35.580105  697202 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:37:35.580254  697202 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/config.json ...
	I1207 23:37:35.580287  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/config.json: {Name:mkc3ab2518e2ac158485368a4283678c9e1aa504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:35.606391  697202 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:37:35.606419  697202 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:37:35.606441  697202 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:37:35.606479  697202 start.go:360] acquireMachinesLock for calico-600852: {Name:mk63843d0e955c4ef490e3f22aabe305d776f228 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:37:35.606600  697202 start.go:364] duration metric: took 97.96µs to acquireMachinesLock for "calico-600852"
	I1207 23:37:35.606633  697202 start.go:93] Provisioning new machine with config: &{Name:calico-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-600852 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:37:35.606730  697202 start.go:125] createHost starting for "" (driver="docker")
	I1207 23:37:35.574610  697240 start.go:309] selected driver: docker
	I1207 23:37:35.574626  697240 start.go:927] validating driver "docker" against <nil>
	I1207 23:37:35.574640  697240 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:37:35.575351  697240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:35.637046  697240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-07 23:37:35.626148888 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:35.637269  697240 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 23:37:35.637584  697240 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:37:35.639544  697240 out.go:179] * Using Docker driver with root privileges
	I1207 23:37:35.640617  697240 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1207 23:37:35.640657  697240 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1207 23:37:35.640764  697240 start.go:353] cluster config:
	{Name:custom-flannel-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:35.642270  697240 out.go:179] * Starting "custom-flannel-600852" primary control-plane node in "custom-flannel-600852" cluster
	I1207 23:37:35.643886  697240 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:37:35.645224  697240 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:37:35.646387  697240 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:35.646419  697240 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 23:37:35.646438  697240 cache.go:65] Caching tarball of preloaded images
	I1207 23:37:35.646478  697240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:37:35.646540  697240 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:37:35.646556  697240 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:37:35.646682  697240 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/config.json ...
	I1207 23:37:35.646712  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/config.json: {Name:mk800147fe034f5238922fec66d596f6aa169033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:35.671849  697240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:37:35.671880  697240 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:37:35.671904  697240 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:37:35.671944  697240 start.go:360] acquireMachinesLock for custom-flannel-600852: {Name:mk15b40cec96074cdc3d9121b669340a772a5a19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:37:35.672061  697240 start.go:364] duration metric: took 93.067µs to acquireMachinesLock for "custom-flannel-600852"
	I1207 23:37:35.672086  697240 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-600852 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:37:35.672186  697240 start.go:125] createHost starting for "" (driver="docker")
	W1207 23:37:34.456566  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:36.457410  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:36.533401  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:39.033103  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	I1207 23:37:35.608827  697202 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1207 23:37:35.609102  697202 start.go:159] libmachine.API.Create for "calico-600852" (driver="docker")
	I1207 23:37:35.609146  697202 client.go:173] LocalClient.Create starting
	I1207 23:37:35.609246  697202 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem
	I1207 23:37:35.609297  697202 main.go:143] libmachine: Decoding PEM data...
	I1207 23:37:35.609323  697202 main.go:143] libmachine: Parsing certificate...
	I1207 23:37:35.609414  697202 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem
	I1207 23:37:35.609442  697202 main.go:143] libmachine: Decoding PEM data...
	I1207 23:37:35.609462  697202 main.go:143] libmachine: Parsing certificate...
	I1207 23:37:35.609968  697202 cli_runner.go:164] Run: docker network inspect calico-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 23:37:35.629047  697202 cli_runner.go:211] docker network inspect calico-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 23:37:35.629121  697202 network_create.go:284] running [docker network inspect calico-600852] to gather additional debugging logs...
	I1207 23:37:35.629144  697202 cli_runner.go:164] Run: docker network inspect calico-600852
	W1207 23:37:35.648368  697202 cli_runner.go:211] docker network inspect calico-600852 returned with exit code 1
	I1207 23:37:35.648412  697202 network_create.go:287] error running [docker network inspect calico-600852]: docker network inspect calico-600852: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-600852 not found
	I1207 23:37:35.648437  697202 network_create.go:289] output of [docker network inspect calico-600852]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-600852 not found
	
	** /stderr **
	I1207 23:37:35.648580  697202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:37:35.668431  697202 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-918c8f4f6e86 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:f0:02:fe:94:4b} reservation:<nil>}
	I1207 23:37:35.669359  697202 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce07fb07c16c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:d2:35:46:a2:0a} reservation:<nil>}
	I1207 23:37:35.669895  697202 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f198eadca31e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:79:39:d6:10:dc} reservation:<nil>}
	I1207 23:37:35.670453  697202 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2feb264898ec IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:86:57:43:7d:13:a7} reservation:<nil>}
	I1207 23:37:35.671461  697202 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e3a4b0}
	I1207 23:37:35.671497  697202 network_create.go:124] attempt to create docker network calico-600852 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1207 23:37:35.671563  697202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-600852 calico-600852
	I1207 23:37:35.726680  697202 network_create.go:108] docker network calico-600852 192.168.85.0/24 created
	I1207 23:37:35.726710  697202 kic.go:121] calculated static IP "192.168.85.2" for the "calico-600852" container
	I1207 23:37:35.726800  697202 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 23:37:35.746383  697202 cli_runner.go:164] Run: docker volume create calico-600852 --label name.minikube.sigs.k8s.io=calico-600852 --label created_by.minikube.sigs.k8s.io=true
	I1207 23:37:35.767378  697202 oci.go:103] Successfully created a docker volume calico-600852
	I1207 23:37:35.767464  697202 cli_runner.go:164] Run: docker run --rm --name calico-600852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-600852 --entrypoint /usr/bin/test -v calico-600852:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 23:37:36.214624  697202 oci.go:107] Successfully prepared a docker volume calico-600852
	I1207 23:37:36.214699  697202 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:36.214712  697202 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 23:37:36.214806  697202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-600852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
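The subnet probing logged above (192.168.49.0/24 through 192.168.76.0/24 skipped as taken, 192.168.85.0/24 chosen for calico-600852) can be reproduced by hand. A minimal bash sketch, assuming only the docker CLI; the candidate list and the step of 9 in the third octet simply mirror what this run logged and are not minikube's actual network.go selection logic:

    # List subnets already claimed by existing docker networks.
    taken=$(docker network ls -q | xargs docker network inspect \
      --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
    # Probe 192.168.49.0/24, 192.168.58.0/24, ... and report the first free one.
    for octet in $(seq 49 9 254); do
      candidate="192.168.${octet}.0/24"
      grep -qx "$candidate" <<<"$taken" || { echo "free: $candidate"; break; }
    done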
	I1207 23:37:35.674275  697240 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1207 23:37:35.674596  697240 start.go:159] libmachine.API.Create for "custom-flannel-600852" (driver="docker")
	I1207 23:37:35.674633  697240 client.go:173] LocalClient.Create starting
	I1207 23:37:35.674706  697240 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem
	I1207 23:37:35.674738  697240 main.go:143] libmachine: Decoding PEM data...
	I1207 23:37:35.674757  697240 main.go:143] libmachine: Parsing certificate...
	I1207 23:37:35.674823  697240 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem
	I1207 23:37:35.674848  697240 main.go:143] libmachine: Decoding PEM data...
	I1207 23:37:35.674858  697240 main.go:143] libmachine: Parsing certificate...
	I1207 23:37:35.675226  697240 cli_runner.go:164] Run: docker network inspect custom-flannel-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 23:37:35.694511  697240 cli_runner.go:211] docker network inspect custom-flannel-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 23:37:35.694611  697240 network_create.go:284] running [docker network inspect custom-flannel-600852] to gather additional debugging logs...
	I1207 23:37:35.694640  697240 cli_runner.go:164] Run: docker network inspect custom-flannel-600852
	W1207 23:37:35.714511  697240 cli_runner.go:211] docker network inspect custom-flannel-600852 returned with exit code 1
	I1207 23:37:35.714550  697240 network_create.go:287] error running [docker network inspect custom-flannel-600852]: docker network inspect custom-flannel-600852: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-600852 not found
	I1207 23:37:35.714572  697240 network_create.go:289] output of [docker network inspect custom-flannel-600852]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-600852 not found
	
	** /stderr **
	I1207 23:37:35.714707  697240 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:37:35.733767  697240 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-918c8f4f6e86 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:f0:02:fe:94:4b} reservation:<nil>}
	I1207 23:37:35.734778  697240 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce07fb07c16c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:d2:35:46:a2:0a} reservation:<nil>}
	I1207 23:37:35.735418  697240 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f198eadca31e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:79:39:d6:10:dc} reservation:<nil>}
	I1207 23:37:35.736182  697240 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2feb264898ec IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:86:57:43:7d:13:a7} reservation:<nil>}
	I1207 23:37:35.737085  697240 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-195088d2e9e3 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:7a:71:26:87:28:da} reservation:<nil>}
	I1207 23:37:35.737835  697240 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-217dc275cbc6 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:a2:b0:5a:0f:49:91} reservation:<nil>}
	I1207 23:37:35.738645  697240 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e3de60}
	I1207 23:37:35.738676  697240 network_create.go:124] attempt to create docker network custom-flannel-600852 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1207 23:37:35.738722  697240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-600852 custom-flannel-600852
	I1207 23:37:35.791482  697240 network_create.go:108] docker network custom-flannel-600852 192.168.103.0/24 created
	I1207 23:37:35.791527  697240 kic.go:121] calculated static IP "192.168.103.2" for the "custom-flannel-600852" container
	I1207 23:37:35.791606  697240 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 23:37:35.813315  697240 cli_runner.go:164] Run: docker volume create custom-flannel-600852 --label name.minikube.sigs.k8s.io=custom-flannel-600852 --label created_by.minikube.sigs.k8s.io=true
	I1207 23:37:35.834796  697240 oci.go:103] Successfully created a docker volume custom-flannel-600852
	I1207 23:37:35.834882  697240 cli_runner.go:164] Run: docker run --rm --name custom-flannel-600852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-600852 --entrypoint /usr/bin/test -v custom-flannel-600852:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 23:37:36.263573  697240 oci.go:107] Successfully prepared a docker volume custom-flannel-600852
	I1207 23:37:36.263657  697240 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:36.263673  697240 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 23:37:36.263770  697240 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-600852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	W1207 23:37:38.956943  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:40.990084  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:41.532651  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:44.032204  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
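The two waits above (node "kindnet-600852" not Ready, pod "coredns-66bc5c9577-p4v2f" not Ready) are minikube's own polling. An equivalent manual spot-check, assuming kubectl is pointed at the respective profile's context, would be:

    # Node readiness for the kindnet profile (hypothetical manual check):
    kubectl get node kindnet-600852 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Pod readiness for the coredns pod the other profile is waiting on:
    kubectl -n kube-system get pod coredns-66bc5c9577-p4v2f \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'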
	I1207 23:37:42.339866  697202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-600852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (6.125015735s)
	I1207 23:37:42.339900  697202 kic.go:203] duration metric: took 6.125183748s to extract preloaded images to volume ...
	W1207 23:37:42.340009  697202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 23:37:42.340046  697202 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 23:37:42.340094  697202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 23:37:42.404789  697202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-600852 --name calico-600852 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-600852 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-600852 --network calico-600852 --ip 192.168.85.2 --volume calico-600852:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 23:37:42.823076  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Running}}
	I1207 23:37:42.844245  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Status}}
	I1207 23:37:42.869450  697202 cli_runner.go:164] Run: docker exec calico-600852 stat /var/lib/dpkg/alternatives/iptables
	I1207 23:37:42.927150  697202 oci.go:144] the created container "calico-600852" has a running status.
	I1207 23:37:42.927262  697202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa...
	I1207 23:37:42.993985  697202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 23:37:43.027596  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Status}}
	I1207 23:37:43.058350  697202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 23:37:43.058376  697202 kic_runner.go:114] Args: [docker exec --privileged calico-600852 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 23:37:43.124558  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Status}}
	I1207 23:37:43.150157  697202 machine.go:94] provisionDockerMachine start ...
	I1207 23:37:43.150254  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:43.173856  697202 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:43.174237  697202 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1207 23:37:43.174284  697202 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:37:43.175136  697202 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59490->127.0.0.1:33493: read: connection reset by peer
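The "connection reset by peer" above is typically transient while the container's sshd is still starting; the port lookup the provisioner uses is visible in the preceding inspect call. A hedged manual equivalent (key path and username taken from this run, container name calico-600852):

    # Resolve the host port docker mapped to the container's 22/tcp ...
    port=$(docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' calico-600852)
    # ... and connect with the key minikube generated for this machine.
    ssh -o StrictHostKeyChecking=no -p "$port" \
      -i /home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa \
      docker@127.0.0.1 hostname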
	I1207 23:37:42.340199  697240 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-600852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (6.076357797s)
	I1207 23:37:42.340228  697240 kic.go:203] duration metric: took 6.076550828s to extract preloaded images to volume ...
	W1207 23:37:42.340312  697240 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 23:37:42.340376  697240 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 23:37:42.340433  697240 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 23:37:42.404804  697240 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-600852 --name custom-flannel-600852 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-600852 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-600852 --network custom-flannel-600852 --ip 192.168.103.2 --volume custom-flannel-600852:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 23:37:42.706514  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Running}}
	I1207 23:37:42.725193  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Status}}
	I1207 23:37:42.746701  697240 cli_runner.go:164] Run: docker exec custom-flannel-600852 stat /var/lib/dpkg/alternatives/iptables
	I1207 23:37:42.796848  697240 oci.go:144] the created container "custom-flannel-600852" has a running status.
	I1207 23:37:42.796890  697240 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa...
	I1207 23:37:42.928016  697240 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 23:37:42.967592  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Status}}
	I1207 23:37:43.000865  697240 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 23:37:43.000893  697240 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-600852 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 23:37:43.067539  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Status}}
	I1207 23:37:43.094592  697240 machine.go:94] provisionDockerMachine start ...
	I1207 23:37:43.094686  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:43.124045  697240 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:43.125001  697240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1207 23:37:43.125028  697240 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:37:43.279170  697240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-600852
	
	I1207 23:37:43.279198  697240 ubuntu.go:182] provisioning hostname "custom-flannel-600852"
	I1207 23:37:43.279267  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:43.301219  697240 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:43.301912  697240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1207 23:37:43.301982  697240 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-600852 && echo "custom-flannel-600852" | sudo tee /etc/hostname
	I1207 23:37:43.449441  697240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-600852
	
	I1207 23:37:43.449535  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:43.472066  697240 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:43.472395  697240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1207 23:37:43.472440  697240 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-600852' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-600852/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-600852' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:37:43.603212  697240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:37:43.603253  697240 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:37:43.603290  697240 ubuntu.go:190] setting up certificates
	I1207 23:37:43.603306  697240 provision.go:84] configureAuth start
	I1207 23:37:43.603388  697240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-600852
	I1207 23:37:43.623316  697240 provision.go:143] copyHostCerts
	I1207 23:37:43.623426  697240 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:37:43.623446  697240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:37:43.623540  697240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:37:43.623668  697240 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:37:43.623680  697240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:37:43.623726  697240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:37:43.623860  697240 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:37:43.623877  697240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:37:43.623917  697240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:37:43.624024  697240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-600852 san=[127.0.0.1 192.168.103.2 custom-flannel-600852 localhost minikube]
	I1207 23:37:43.702486  697240 provision.go:177] copyRemoteCerts
	I1207 23:37:43.702564  697240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:37:43.702613  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:43.721075  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:37:43.816180  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:37:43.837404  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1207 23:37:43.855233  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:37:43.874256  697240 provision.go:87] duration metric: took 270.933131ms to configureAuth
	I1207 23:37:43.874285  697240 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:37:43.874488  697240 config.go:182] Loaded profile config "custom-flannel-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:43.874601  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:43.892175  697240 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:43.892426  697240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1207 23:37:43.892445  697240 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:37:44.168099  697240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:37:44.168124  697240 machine.go:97] duration metric: took 1.073506601s to provisionDockerMachine
	I1207 23:37:44.168137  697240 client.go:176] duration metric: took 8.493496154s to LocalClient.Create
	I1207 23:37:44.168161  697240 start.go:167] duration metric: took 8.49356644s to libmachine.API.Create "custom-flannel-600852"
	I1207 23:37:44.168171  697240 start.go:293] postStartSetup for "custom-flannel-600852" (driver="docker")
	I1207 23:37:44.168186  697240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:37:44.168251  697240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:37:44.168300  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:44.187533  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:37:44.285708  697240 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:37:44.289278  697240 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:37:44.289311  697240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:37:44.289345  697240 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:37:44.289418  697240 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:37:44.289571  697240 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:37:44.289703  697240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:37:44.297700  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:37:44.318296  697240 start.go:296] duration metric: took 150.110422ms for postStartSetup
	I1207 23:37:44.318665  697240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-600852
	I1207 23:37:44.336837  697240 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/config.json ...
	I1207 23:37:44.337147  697240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:37:44.337202  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:44.355307  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:37:44.445690  697240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:37:44.450318  697240 start.go:128] duration metric: took 8.778116889s to createHost
	I1207 23:37:44.450362  697240 start.go:83] releasing machines lock for "custom-flannel-600852", held for 8.778286664s
	I1207 23:37:44.450440  697240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-600852
	I1207 23:37:44.469538  697240 ssh_runner.go:195] Run: cat /version.json
	I1207 23:37:44.469584  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:44.469610  697240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:37:44.469678  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:44.487668  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:37:44.488859  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:37:44.634686  697240 ssh_runner.go:195] Run: systemctl --version
	I1207 23:37:44.641623  697240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:37:44.677161  697240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:37:44.681818  697240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:37:44.681886  697240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:37:44.708292  697240 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 23:37:44.708318  697240 start.go:496] detecting cgroup driver to use...
	I1207 23:37:44.708378  697240 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:37:44.708427  697240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:37:44.725043  697240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:37:44.737747  697240 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:37:44.737811  697240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:37:44.754229  697240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:37:44.771728  697240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:37:44.856910  697240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:37:44.946607  697240 docker.go:234] disabling docker service ...
	I1207 23:37:44.946683  697240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:37:44.965739  697240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:37:44.978062  697240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:37:45.065191  697240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:37:45.152212  697240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:37:45.165142  697240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:37:45.179683  697240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:37:45.179755  697240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.190176  697240 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:37:45.190240  697240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.199541  697240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.208651  697240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.217593  697240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:37:45.226681  697240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.236063  697240 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.250399  697240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.259658  697240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:37:45.267649  697240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:37:45.275381  697240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:45.355276  697240 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1207 23:37:45.510350  697240 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:37:45.510416  697240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:37:45.514689  697240 start.go:564] Will wait 60s for crictl version
	I1207 23:37:45.514755  697240 ssh_runner.go:195] Run: which crictl
	I1207 23:37:45.518370  697240 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:37:45.545589  697240 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:37:45.545682  697240 ssh_runner.go:195] Run: crio --version
	I1207 23:37:45.574304  697240 ssh_runner.go:195] Run: crio --version
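The lines above point CRI-O at the minikube pause image and the systemd cgroup driver before restarting and version-checking the runtime. Condensed into a standalone sketch, with paths and values copied from this log rather than offered as a general recipe:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo crictl version   # expects RuntimeName: cri-o once /var/run/crio/crio.sock is up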
	I1207 23:37:45.604889  697240 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:37:45.606350  697240 cli_runner.go:164] Run: docker network inspect custom-flannel-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:37:45.625246  697240 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1207 23:37:45.629591  697240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:37:45.640950  697240 kubeadm.go:884] updating cluster {Name:custom-flannel-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-600852 Namespace:default APIServerHAVIP: APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCore
DNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:37:45.641114  697240 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:45.641163  697240 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:37:45.675335  697240 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:37:45.675361  697240 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:37:45.675409  697240 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:37:45.702412  697240 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:37:45.702437  697240 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:37:45.702447  697240 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1207 23:37:45.702550  697240 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-600852 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1207 23:37:45.702632  697240 ssh_runner.go:195] Run: crio config
	I1207 23:37:45.749943  697240 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1207 23:37:45.749986  697240 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:37:45.750007  697240 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-600852 NodeName:custom-flannel-600852 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:37:45.750119  697240 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-600852"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
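The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hypothetical sanity check (not something this run performs), kubeadm can parse and validate such a file without changing the node:

    # Dry run only: prints what kubeadm init would do, makes no changes.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run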
	I1207 23:37:45.750180  697240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:37:45.758753  697240 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:37:45.758837  697240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:37:45.767296  697240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1207 23:37:45.780180  697240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:37:45.795843  697240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1207 23:37:45.809254  697240 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:37:45.813068  697240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:37:45.823824  697240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:45.922810  697240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:37:45.954280  697240 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852 for IP: 192.168.103.2
	I1207 23:37:45.954305  697240 certs.go:195] generating shared ca certs ...
	I1207 23:37:45.954350  697240 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:45.954525  697240 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:37:45.954583  697240 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:37:45.954599  697240 certs.go:257] generating profile certs ...
	I1207 23:37:45.954671  697240 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/client.key
	I1207 23:37:45.954687  697240 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/client.crt with IP's: []
	I1207 23:37:46.026656  697240 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/client.crt ...
	I1207 23:37:46.026709  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/client.crt: {Name:mk8a9624c431cb6edf9711331cdf2043026fc87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:46.026910  697240 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/client.key ...
	I1207 23:37:46.026929  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/client.key: {Name:mk32c7d69ac8b7e73f5693f03228f28056e7f2f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:46.027044  697240 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.key.835b6359
	I1207 23:37:46.027066  697240 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt.835b6359 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1207 23:37:46.119650  697240 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt.835b6359 ...
	I1207 23:37:46.119682  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt.835b6359: {Name:mk877482c89ea5c11c3d56ef01d2dd1d5ef365ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:46.119871  697240 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.key.835b6359 ...
	I1207 23:37:46.119894  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.key.835b6359: {Name:mked33bc8da818f53bc50f0f0e4ef36a5189fa9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:46.120005  697240 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt.835b6359 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt
	I1207 23:37:46.120100  697240 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.key.835b6359 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.key
	I1207 23:37:46.120186  697240 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.key
	I1207 23:37:46.120209  697240 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.crt with IP's: []
	I1207 23:37:46.177708  697240 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.crt ...
	I1207 23:37:46.177738  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.crt: {Name:mkd8e414040685141640fecdc73a2a45affca604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:46.177898  697240 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.key ...
	I1207 23:37:46.177910  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.key: {Name:mk0f72864b00bc12cd813e39176c3793627ff229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:46.178092  697240 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:37:46.178131  697240 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:37:46.178145  697240 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:37:46.178169  697240 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:37:46.178196  697240 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:37:46.178229  697240 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:37:46.178275  697240 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:37:46.178880  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:37:46.198288  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:37:46.217010  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:37:46.237272  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:37:46.258805  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1207 23:37:46.276861  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:37:46.294322  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:37:46.312873  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:37:46.332964  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:37:46.354112  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:37:46.373402  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:37:46.392365  697240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:37:46.406030  697240 ssh_runner.go:195] Run: openssl version
	I1207 23:37:46.412665  697240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:37:46.420869  697240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:37:46.429539  697240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:37:46.433803  697240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:37:46.433871  697240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:37:46.472505  697240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:37:46.480520  697240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
	I1207 23:37:46.489387  697240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:46.497259  697240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:37:46.505422  697240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:46.509827  697240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:46.509892  697240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:46.551231  697240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:37:46.559886  697240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:37:46.568529  697240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:37:46.576298  697240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:37:46.584235  697240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:37:46.588371  697240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:37:46.588429  697240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:37:46.625148  697240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:37:46.633370  697240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
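	(Editor's note: the `openssl x509 -hash` / `ln -fs ... <hash>.0` pairs above follow OpenSSL's subject-hash lookup convention: the hash printed for a CA certificate becomes the name of its `<hash>.0` symlink under /etc/ssl/certs. A minimal sketch of the same idea, with a placeholder certificate path rather than one of the files from this run:

	#!/usr/bin/env bash
	# Sketch only: reproduces the subject-hash symlink convention used above.
	cert=/usr/share/ca-certificates/example-ca.pem   # placeholder path

	# Print the subject hash that OpenSSL's CA directory lookup expects
	# to find as the symlink name (with a ".0" suffix).
	hash=$(openssl x509 -hash -noout -in "$cert")

	# Link the cert under /etc/ssl/certs/<hash>.0 so TLS clients trust it.
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"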
	I1207 23:37:46.642224  697240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:37:46.646106  697240 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:37:46.646168  697240 kubeadm.go:401] StartCluster: {Name:custom-flannel-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:46.646259  697240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:37:46.646300  697240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:37:46.674079  697240 cri.go:89] found id: ""
	I1207 23:37:46.674144  697240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:37:46.682721  697240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:37:46.691528  697240 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:37:46.691593  697240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:37:46.699787  697240 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:37:46.699811  697240 kubeadm.go:158] found existing configuration files:
	
	I1207 23:37:46.699862  697240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:37:46.708052  697240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:37:46.708115  697240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:37:46.715867  697240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:37:46.723726  697240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:37:46.723796  697240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:37:46.731434  697240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:37:46.739410  697240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:37:46.739464  697240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:37:46.747221  697240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:37:46.755432  697240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:37:46.755490  697240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
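	(Editor's note: the four grep/rm pairs above all apply one rule: if a kubeconfig under /etc/kubernetes does not reference the expected control-plane endpoint, remove it so `kubeadm init` regenerates it. A condensed, illustrative sketch of that pattern, not minikube's actual source:

	#!/usr/bin/env bash
	# Sketch of the stale-kubeconfig cleanup pattern seen in the log above.
	endpoint="https://control-plane.minikube.internal:8443"
	for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Keep the file only if it already points at the expected endpoint;
	  # otherwise delete it so kubeadm writes a fresh one.
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$conf"; then
	    sudo rm -f "/etc/kubernetes/$conf"
	  fi
	done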
	I1207 23:37:46.763543  697240 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:37:46.806142  697240 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 23:37:46.806210  697240 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:37:46.827979  697240 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:37:46.828044  697240 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:37:46.828121  697240 kubeadm.go:319] OS: Linux
	I1207 23:37:46.828203  697240 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:37:46.828264  697240 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:37:46.828390  697240 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:37:46.828463  697240 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:37:46.828532  697240 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:37:46.828596  697240 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:37:46.828675  697240 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:37:46.828743  697240 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:37:46.892537  697240 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:37:46.892692  697240 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:37:46.892875  697240 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:37:46.900999  697240 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1207 23:37:43.457256  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:45.957159  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	I1207 23:37:46.307553  697202 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-600852
	
	I1207 23:37:46.307583  697202 ubuntu.go:182] provisioning hostname "calico-600852"
	I1207 23:37:46.307668  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:46.327449  697202 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:46.327772  697202 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1207 23:37:46.327804  697202 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-600852 && echo "calico-600852" | sudo tee /etc/hostname
	I1207 23:37:46.468528  697202 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-600852
	
	I1207 23:37:46.468628  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:46.488224  697202 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:46.488531  697202 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1207 23:37:46.488550  697202 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-600852' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-600852/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-600852' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:37:46.620573  697202 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:37:46.620605  697202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:37:46.620634  697202 ubuntu.go:190] setting up certificates
	I1207 23:37:46.620647  697202 provision.go:84] configureAuth start
	I1207 23:37:46.620717  697202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-600852
	I1207 23:37:46.641379  697202 provision.go:143] copyHostCerts
	I1207 23:37:46.641462  697202 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:37:46.641480  697202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:37:46.641550  697202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:37:46.641663  697202 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:37:46.641674  697202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:37:46.641713  697202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:37:46.641806  697202 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:37:46.641817  697202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:37:46.641853  697202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:37:46.641928  697202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.calico-600852 san=[127.0.0.1 192.168.85.2 calico-600852 localhost minikube]
	I1207 23:37:46.747911  697202 provision.go:177] copyRemoteCerts
	I1207 23:37:46.747964  697202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:37:46.748001  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:46.769238  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:37:46.867766  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:37:46.889480  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 23:37:46.909394  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:37:46.927052  697202 provision.go:87] duration metric: took 306.387989ms to configureAuth
	I1207 23:37:46.927083  697202 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:37:46.927305  697202 config.go:182] Loaded profile config "calico-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:46.927435  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:46.946235  697202 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:46.946559  697202 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1207 23:37:46.946589  697202 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:37:47.229624  697202 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:37:47.229674  697202 machine.go:97] duration metric: took 4.079494081s to provisionDockerMachine
	I1207 23:37:47.229686  697202 client.go:176] duration metric: took 11.620532329s to LocalClient.Create
	I1207 23:37:47.229702  697202 start.go:167] duration metric: took 11.62060605s to libmachine.API.Create "calico-600852"
	I1207 23:37:47.229712  697202 start.go:293] postStartSetup for "calico-600852" (driver="docker")
	I1207 23:37:47.229721  697202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:37:47.229778  697202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:37:47.229815  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:47.249083  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:37:47.344551  697202 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:37:47.348117  697202 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:37:47.348146  697202 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:37:47.348158  697202 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:37:47.348222  697202 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:37:47.348460  697202 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:37:47.348622  697202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:37:47.356612  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:37:47.377303  697202 start.go:296] duration metric: took 147.575453ms for postStartSetup
	I1207 23:37:47.377718  697202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-600852
	I1207 23:37:47.396703  697202 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/config.json ...
	I1207 23:37:47.396965  697202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:37:47.397004  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:47.416235  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:37:47.508442  697202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:37:47.513073  697202 start.go:128] duration metric: took 11.90632642s to createHost
	I1207 23:37:47.513101  697202 start.go:83] releasing machines lock for "calico-600852", held for 11.906484487s
	I1207 23:37:47.513171  697202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-600852
	I1207 23:37:47.534994  697202 ssh_runner.go:195] Run: cat /version.json
	I1207 23:37:47.535047  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:47.535080  697202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:37:47.535162  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:47.556062  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:37:47.558419  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:37:47.707730  697202 ssh_runner.go:195] Run: systemctl --version
	I1207 23:37:47.715115  697202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:37:47.752265  697202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:37:47.757322  697202 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:37:47.757405  697202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:37:47.784815  697202 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 23:37:47.784840  697202 start.go:496] detecting cgroup driver to use...
	I1207 23:37:47.784871  697202 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:37:47.784919  697202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:37:47.801532  697202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:37:47.814605  697202 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:37:47.814675  697202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:37:47.832123  697202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:37:47.851225  697202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:37:47.938152  697202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:37:48.033487  697202 docker.go:234] disabling docker service ...
	I1207 23:37:48.033552  697202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:37:48.053822  697202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:37:48.067528  697202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:37:48.158030  697202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:37:48.250048  697202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:37:48.265676  697202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:37:48.281951  697202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:37:48.282045  697202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.294128  697202 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:37:48.294197  697202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.304720  697202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.314523  697202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.324484  697202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:37:48.333594  697202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.342651  697202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.356711  697202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.366809  697202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:37:48.375853  697202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:37:48.383607  697202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:48.480441  697202 ssh_runner.go:195] Run: sudo systemctl restart crio
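	(Editor's note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below. This is a reconstruction from the commands, written as one self-contained drop-in for readability; the real flow edits the existing file in place, and the section headers reflect my reading of cri-o's config layout rather than a dump of the actual file:

	#!/usr/bin/env bash
	# Sketch only: net effect of the cri-o configuration edits above.
	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	# cri-o is then reloaded and restarted, as the two log lines above show.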
	I1207 23:37:48.621673  697202 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:37:48.621751  697202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:37:48.626007  697202 start.go:564] Will wait 60s for crictl version
	I1207 23:37:48.626080  697202 ssh_runner.go:195] Run: which crictl
	I1207 23:37:48.629792  697202 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:37:48.656613  697202 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:37:48.656696  697202 ssh_runner.go:195] Run: crio --version
	I1207 23:37:48.688631  697202 ssh_runner.go:195] Run: crio --version
	I1207 23:37:48.721122  697202 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1207 23:37:46.032763  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:48.033350  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	I1207 23:37:46.903106  697240 out.go:252]   - Generating certificates and keys ...
	I1207 23:37:46.903221  697240 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:37:46.903346  697240 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:37:47.057702  697240 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:37:47.145711  697240 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 23:37:47.283656  697240 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 23:37:47.443811  697240 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 23:37:47.627502  697240 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 23:37:47.627687  697240 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-600852 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1207 23:37:47.741433  697240 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 23:37:47.741596  697240 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-600852 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1207 23:37:48.250407  697240 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 23:37:48.484094  697240 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 23:37:48.700165  697240 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 23:37:48.700264  697240 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:37:48.796665  697240 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:37:48.985875  697240 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:37:49.253932  697240 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:37:49.612996  697240 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:37:49.806151  697240 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:37:49.806958  697240 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:37:49.812529  697240 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 23:37:48.722395  697202 cli_runner.go:164] Run: docker network inspect calico-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:37:48.741546  697202 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1207 23:37:48.746060  697202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
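	(Editor's note: the one-liner above is an idempotent /etc/hosts update: drop any existing line for the name, append the fresh mapping, then copy the result back over the file. Expanded into a readable sketch; the helper function is illustrative, the IP and hostname are the ones from this run:

	#!/usr/bin/env bash
	# Illustrative expansion of the /etc/hosts update one-liner above.
	set -euo pipefail
	update_hosts_entry() {
	  local ip=$1 name=$2 tmp
	  tmp=$(mktemp)
	  # Remove any existing line ending in "<TAB><name>", then append the mapping.
	  grep -v $'\t'"$name"'$' /etc/hosts > "$tmp" || true
	  printf '%s\t%s\n' "$ip" "$name" >> "$tmp"
	  # Copy rather than rename: inside a container /etc/hosts is typically a
	  # bind mount, so it has to be rewritten in place.
	  sudo cp "$tmp" /etc/hosts
	  rm -f "$tmp"
	}
	update_hosts_entry 192.168.85.1 host.minikube.internal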
	I1207 23:37:48.757152  697202 kubeadm.go:884] updating cluster {Name:calico-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:37:48.757291  697202 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:48.757402  697202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:37:48.794158  697202 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:37:48.794184  697202 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:37:48.794238  697202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:37:48.822171  697202 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:37:48.822198  697202 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:37:48.822209  697202 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1207 23:37:48.822350  697202 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-600852 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1207 23:37:48.822434  697202 ssh_runner.go:195] Run: crio config
	I1207 23:37:48.883878  697202 cni.go:84] Creating CNI manager for "calico"
	I1207 23:37:48.883916  697202 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:37:48.883939  697202 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-600852 NodeName:calico-600852 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:37:48.884071  697202 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-600852"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:37:48.884151  697202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:37:48.893454  697202 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:37:48.893533  697202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:37:48.902242  697202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1207 23:37:48.915810  697202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:37:48.933015  697202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1207 23:37:48.946732  697202 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:37:48.950652  697202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:37:48.961546  697202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:49.058120  697202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:37:49.089310  697202 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852 for IP: 192.168.85.2
	I1207 23:37:49.089345  697202 certs.go:195] generating shared ca certs ...
	I1207 23:37:49.089375  697202 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.089643  697202 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:37:49.089703  697202 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:37:49.089716  697202 certs.go:257] generating profile certs ...
	I1207 23:37:49.089787  697202 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/client.key
	I1207 23:37:49.089803  697202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/client.crt with IP's: []
	I1207 23:37:49.315583  697202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/client.crt ...
	I1207 23:37:49.315613  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/client.crt: {Name:mk9246e4e51936452e13c158ca3debae4b8fa078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.315809  697202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/client.key ...
	I1207 23:37:49.315829  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/client.key: {Name:mk41c8ae6d14eb827fe4a8440f28a3f158fd7879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.315965  697202 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.key.bc22f359
	I1207 23:37:49.315983  697202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt.bc22f359 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1207 23:37:49.439672  697202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt.bc22f359 ...
	I1207 23:37:49.439705  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt.bc22f359: {Name:mkc553384c69fb61ba71740d3335de3cab4fd14c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.439893  697202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.key.bc22f359 ...
	I1207 23:37:49.439907  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.key.bc22f359: {Name:mkc2e1d0bfa6b237c0b447d1a8825119b2d2ef05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.439979  697202 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt.bc22f359 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt
	I1207 23:37:49.440076  697202 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.key.bc22f359 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.key
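	(Editor's note: the profile apiserver certificate above is signed by the shared minikube CA with the service IP, loopback, cluster IP, and node IP as SANs. minikube does this in Go (crypto.go), not with openssl; the following is only a rough command-line equivalent with placeholder paths and validity:

	#!/usr/bin/env bash
	# Sketch only: mint a CA-signed server cert with the IP SANs seen above.
	set -euo pipefail
	CA_CRT=ca.crt CA_KEY=ca.key   # placeholder paths to the CA pair
	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA "$CA_CRT" -CAkey "$CA_KEY" \
	  -CAcreateserial -days 365 -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2')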
	I1207 23:37:49.440155  697202 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.key
	I1207 23:37:49.440172  697202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.crt with IP's: []
	I1207 23:37:49.514150  697202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.crt ...
	I1207 23:37:49.514178  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.crt: {Name:mk97b1a455a2b9bfb030964cd6977408a79040a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.514368  697202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.key ...
	I1207 23:37:49.514384  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.key: {Name:mkf9f4cc3cb828ff6ef08a4aca0bf7b4c1aa7539 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.514604  697202 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:37:49.514646  697202 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:37:49.514657  697202 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:37:49.514680  697202 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:37:49.514704  697202 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:37:49.514733  697202 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:37:49.514772  697202 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:37:49.515456  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:37:49.535595  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:37:49.554113  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:37:49.572786  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:37:49.590631  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1207 23:37:49.608351  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:37:49.626842  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:37:49.646114  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:37:49.666129  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:37:49.686982  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:37:49.705972  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:37:49.724847  697202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:37:49.738191  697202 ssh_runner.go:195] Run: openssl version
	I1207 23:37:49.744883  697202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:49.752662  697202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:37:49.761172  697202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:49.766501  697202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:49.766563  697202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:49.807755  697202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:37:49.817966  697202 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:37:49.826049  697202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:37:49.834554  697202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:37:49.843290  697202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:37:49.847578  697202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:37:49.847639  697202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:37:49.890621  697202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:37:49.898845  697202 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
	I1207 23:37:49.907068  697202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:37:49.915401  697202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:37:49.923873  697202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:37:49.928171  697202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:37:49.928235  697202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:37:49.966142  697202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:37:49.974965  697202 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
	I1207 23:37:49.984229  697202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:37:49.988486  697202 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:37:49.988558  697202 kubeadm.go:401] StartCluster: {Name:calico-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:49.988634  697202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:37:49.988698  697202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:37:50.018495  697202 cri.go:89] found id: ""
	I1207 23:37:50.018561  697202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:37:50.027140  697202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:37:50.037313  697202 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:37:50.037409  697202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:37:50.045834  697202 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:37:50.045857  697202 kubeadm.go:158] found existing configuration files:
	
	I1207 23:37:50.045901  697202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:37:50.054279  697202 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:37:50.054452  697202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:37:50.063032  697202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:37:50.071466  697202 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:37:50.071529  697202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:37:50.079305  697202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:37:50.087412  697202 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:37:50.087483  697202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:37:50.095229  697202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:37:50.103530  697202 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:37:50.103596  697202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 23:37:50.111962  697202 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:37:50.153270  697202 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 23:37:50.153384  697202 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:37:50.174426  697202 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:37:50.174507  697202 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:37:50.174577  697202 kubeadm.go:319] OS: Linux
	I1207 23:37:50.174671  697202 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:37:50.174741  697202 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:37:50.174806  697202 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:37:50.174852  697202 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:37:50.174923  697202 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:37:50.174999  697202 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:37:50.175089  697202 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:37:50.175160  697202 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:37:50.241958  697202 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:37:50.242108  697202 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:37:50.242249  697202 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:37:50.249773  697202 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 23:37:50.256443  697202 out.go:252]   - Generating certificates and keys ...
	I1207 23:37:50.256557  697202 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:37:50.256645  697202 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:37:49.814207  697240 out.go:252]   - Booting up control plane ...
	I1207 23:37:49.814372  697240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:37:49.814485  697240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:37:49.816445  697240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:37:49.831652  697240 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:37:49.831898  697240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:37:49.840067  697240 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:37:49.840501  697240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:37:49.840568  697240 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:37:49.955871  697240 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 23:37:49.956042  697240 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1207 23:37:48.457062  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:50.957365  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:50.533839  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	I1207 23:37:51.533645  687309 pod_ready.go:94] pod "coredns-66bc5c9577-p4v2f" is "Ready"
	I1207 23:37:51.533680  687309 pod_ready.go:86] duration metric: took 35.506908878s for pod "coredns-66bc5c9577-p4v2f" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.536614  687309 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.540857  687309 pod_ready.go:94] pod "etcd-default-k8s-diff-port-312944" is "Ready"
	I1207 23:37:51.540881  687309 pod_ready.go:86] duration metric: took 4.240955ms for pod "etcd-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.542925  687309 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.546931  687309 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-312944" is "Ready"
	I1207 23:37:51.546955  687309 pod_ready.go:86] duration metric: took 4.009116ms for pod "kube-apiserver-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.548947  687309 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.733165  687309 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-312944" is "Ready"
	I1207 23:37:51.733197  687309 pod_ready.go:86] duration metric: took 184.229643ms for pod "kube-controller-manager-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.931764  687309 pod_ready.go:83] waiting for pod "kube-proxy-7stg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:52.330433  687309 pod_ready.go:94] pod "kube-proxy-7stg5" is "Ready"
	I1207 23:37:52.330464  687309 pod_ready.go:86] duration metric: took 398.673038ms for pod "kube-proxy-7stg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:52.532189  687309 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:52.930982  687309 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-312944" is "Ready"
	I1207 23:37:52.931018  687309 pod_ready.go:86] duration metric: took 398.79821ms for pod "kube-scheduler-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:52.931033  687309 pod_ready.go:40] duration metric: took 36.908392392s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:37:52.982802  687309 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 23:37:52.984436  687309 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-312944" cluster and "default" namespace by default
	I1207 23:37:50.421391  697202 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:37:50.554773  697202 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 23:37:50.658025  697202 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 23:37:50.866863  697202 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 23:37:51.050985  697202 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 23:37:51.051159  697202 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-600852 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1207 23:37:51.209204  697202 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 23:37:51.209612  697202 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-600852 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1207 23:37:51.560636  697202 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 23:37:51.846163  697202 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 23:37:52.203239  697202 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 23:37:52.203576  697202 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:37:52.580235  697202 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:37:52.898080  697202 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:37:53.671477  697202 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:37:53.753966  697202 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:37:53.848471  697202 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:37:53.849350  697202 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:37:53.853007  697202 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 23:37:50.957502  697240 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001823568s
	I1207 23:37:50.962005  697240 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 23:37:50.962147  697240 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1207 23:37:50.962242  697240 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 23:37:50.962344  697240 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 23:37:52.308872  697240 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.346774352s
	I1207 23:37:52.794936  697240 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.83266571s
	I1207 23:37:54.464007  697240 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501991911s
	I1207 23:37:54.481832  697240 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 23:37:54.506812  697240 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 23:37:54.518357  697240 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 23:37:54.518677  697240 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-600852 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 23:37:54.527249  697240 kubeadm.go:319] [bootstrap-token] Using token: 3n01no.dt369lpba9g6frnf
	I1207 23:37:53.857851  697202 out.go:252]   - Booting up control plane ...
	I1207 23:37:53.858009  697202 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:37:53.858119  697202 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:37:53.858210  697202 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:37:53.873295  697202 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:37:53.873437  697202 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:37:53.883292  697202 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:37:53.883755  697202 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:37:53.883864  697202 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:37:54.007036  697202 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 23:37:54.007234  697202 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 23:37:55.008696  697202 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001763784s
	I1207 23:37:55.012005  697202 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 23:37:55.012134  697202 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1207 23:37:55.012284  697202 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 23:37:55.012435  697202 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 23:37:54.528778  697240 out.go:252]   - Configuring RBAC rules ...
	I1207 23:37:54.528924  697240 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 23:37:54.532897  697240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 23:37:54.539239  697240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 23:37:54.542220  697240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 23:37:54.544999  697240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 23:37:54.547719  697240 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 23:37:54.871561  697240 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 23:37:55.287125  697240 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 23:37:55.871441  697240 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 23:37:55.872880  697240 kubeadm.go:319] 
	I1207 23:37:55.873132  697240 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 23:37:55.873149  697240 kubeadm.go:319] 
	I1207 23:37:55.873312  697240 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 23:37:55.873321  697240 kubeadm.go:319] 
	I1207 23:37:55.873384  697240 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 23:37:55.873477  697240 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 23:37:55.873548  697240 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 23:37:55.873561  697240 kubeadm.go:319] 
	I1207 23:37:55.873622  697240 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 23:37:55.873629  697240 kubeadm.go:319] 
	I1207 23:37:55.873686  697240 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 23:37:55.873692  697240 kubeadm.go:319] 
	I1207 23:37:55.873749  697240 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 23:37:55.874010  697240 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 23:37:55.874128  697240 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 23:37:55.874138  697240 kubeadm.go:319] 
	I1207 23:37:55.874270  697240 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 23:37:55.874416  697240 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 23:37:55.874426  697240 kubeadm.go:319] 
	I1207 23:37:55.874544  697240 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3n01no.dt369lpba9g6frnf \
	I1207 23:37:55.874707  697240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 \
	I1207 23:37:55.874738  697240 kubeadm.go:319] 	--control-plane 
	I1207 23:37:55.874743  697240 kubeadm.go:319] 
	I1207 23:37:55.874916  697240 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 23:37:55.874931  697240 kubeadm.go:319] 
	I1207 23:37:55.875051  697240 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3n01no.dt369lpba9g6frnf \
	I1207 23:37:55.875200  697240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 
	I1207 23:37:55.878393  697240 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 23:37:55.878573  697240 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 23:37:55.878613  697240 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1207 23:37:55.880468  697240 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	W1207 23:37:52.957500  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:55.457377  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	I1207 23:37:56.706613  697202 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.694503617s
	I1207 23:37:57.076842  697202 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.064835924s
	I1207 23:37:59.013801  697202 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001721609s
	I1207 23:37:59.030204  697202 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 23:37:59.041137  697202 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 23:37:59.051015  697202 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 23:37:59.051275  697202 kubeadm.go:319] [mark-control-plane] Marking the node calico-600852 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 23:37:59.059318  697202 kubeadm.go:319] [bootstrap-token] Using token: k6if0t.dzl4572wdn2qqw88
	I1207 23:37:59.061529  697202 out.go:252]   - Configuring RBAC rules ...
	I1207 23:37:59.061681  697202 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 23:37:59.066129  697202 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 23:37:59.073546  697202 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 23:37:59.076751  697202 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 23:37:59.079275  697202 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 23:37:59.083840  697202 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 23:37:59.420473  697202 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 23:37:59.834460  697202 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 23:38:00.419356  697202 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 23:38:00.420472  697202 kubeadm.go:319] 
	I1207 23:38:00.420573  697202 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 23:38:00.420585  697202 kubeadm.go:319] 
	I1207 23:38:00.420691  697202 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 23:38:00.420702  697202 kubeadm.go:319] 
	I1207 23:38:00.420737  697202 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 23:38:00.420823  697202 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 23:38:00.420899  697202 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 23:38:00.420908  697202 kubeadm.go:319] 
	I1207 23:38:00.420986  697202 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 23:38:00.420996  697202 kubeadm.go:319] 
	I1207 23:38:00.421077  697202 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 23:38:00.421087  697202 kubeadm.go:319] 
	I1207 23:38:00.421161  697202 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 23:38:00.421269  697202 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 23:38:00.421397  697202 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 23:38:00.421408  697202 kubeadm.go:319] 
	I1207 23:38:00.421539  697202 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 23:38:00.421651  697202 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 23:38:00.421660  697202 kubeadm.go:319] 
	I1207 23:38:00.421770  697202 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token k6if0t.dzl4572wdn2qqw88 \
	I1207 23:38:00.421923  697202 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 \
	I1207 23:38:00.421956  697202 kubeadm.go:319] 	--control-plane 
	I1207 23:38:00.421965  697202 kubeadm.go:319] 
	I1207 23:38:00.422086  697202 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 23:38:00.422095  697202 kubeadm.go:319] 
	I1207 23:38:00.422195  697202 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token k6if0t.dzl4572wdn2qqw88 \
	I1207 23:38:00.422342  697202 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 
	I1207 23:38:00.425705  697202 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 23:38:00.425899  697202 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 23:38:00.425918  697202 cni.go:84] Creating CNI manager for "calico"
	I1207 23:38:00.427838  697202 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1207 23:37:55.881576  697240 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1207 23:37:55.881637  697240 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1207 23:37:55.885946  697240 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1207 23:37:55.885972  697240 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1207 23:37:55.906284  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 23:37:56.308367  697240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 23:37:56.308416  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:56.308544  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-600852 minikube.k8s.io/updated_at=2025_12_07T23_37_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=custom-flannel-600852 minikube.k8s.io/primary=true
	I1207 23:37:56.407068  697240 ops.go:34] apiserver oom_adj: -16
	I1207 23:37:56.407202  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:56.907400  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:57.407404  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:57.907560  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:58.408072  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:58.907489  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:59.407858  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:59.907568  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:00.407401  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:00.483488  697240 kubeadm.go:1114] duration metric: took 4.175116114s to wait for elevateKubeSystemPrivileges
	I1207 23:38:00.483533  697240 kubeadm.go:403] duration metric: took 13.83737016s to StartCluster
	I1207 23:38:00.483556  697240 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:38:00.483633  697240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:38:00.485027  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:38:00.485293  697240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 23:38:00.485297  697240 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:38:00.485406  697240 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:38:00.485500  697240 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-600852"
	I1207 23:38:00.485519  697240 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-600852"
	I1207 23:38:00.485531  697240 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-600852"
	I1207 23:38:00.485546  697240 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-600852"
	I1207 23:38:00.485568  697240 host.go:66] Checking if "custom-flannel-600852" exists ...
	I1207 23:38:00.485526  697240 config.go:182] Loaded profile config "custom-flannel-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:38:00.485954  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Status}}
	I1207 23:38:00.486106  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Status}}
	I1207 23:38:00.486903  697240 out.go:179] * Verifying Kubernetes components...
	I1207 23:38:00.488169  697240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:38:00.512789  697240 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-600852"
	I1207 23:38:00.512842  697240 host.go:66] Checking if "custom-flannel-600852" exists ...
	I1207 23:38:00.513350  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Status}}
	I1207 23:38:00.515122  697240 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1207 23:37:57.957064  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	I1207 23:37:58.957197  684670 node_ready.go:49] node "kindnet-600852" is "Ready"
	I1207 23:37:58.957236  684670 node_ready.go:38] duration metric: took 41.503819012s for node "kindnet-600852" to be "Ready" ...
	I1207 23:37:58.957256  684670 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:37:58.957318  684670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:37:58.971503  684670 api_server.go:72] duration metric: took 41.802323361s to wait for apiserver process to appear ...
	I1207 23:37:58.971533  684670 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:37:58.971552  684670 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:37:58.977257  684670 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1207 23:37:58.978268  684670 api_server.go:141] control plane version: v1.34.2
	I1207 23:37:58.978297  684670 api_server.go:131] duration metric: took 6.756228ms to wait for apiserver health ...
	I1207 23:37:58.978308  684670 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:37:58.982434  684670 system_pods.go:59] 8 kube-system pods found
	I1207 23:37:58.982469  684670 system_pods.go:61] "coredns-66bc5c9577-8rwsj" [d85f99d6-a1ba-4cfc-bcdc-aac22ea4af3e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:58.982476  684670 system_pods.go:61] "etcd-kindnet-600852" [adf5b308-5358-4d7b-9df5-bafffa61f8b6] Running
	I1207 23:37:58.982482  684670 system_pods.go:61] "kindnet-vzkfg" [87c7cd14-d729-423a-a43f-bdb77eaeba04] Running
	I1207 23:37:58.982485  684670 system_pods.go:61] "kube-apiserver-kindnet-600852" [3c3cfd49-d544-4dfb-bf4f-7894225a944c] Running
	I1207 23:37:58.982488  684670 system_pods.go:61] "kube-controller-manager-kindnet-600852" [502a4d63-dedc-4a8b-a1ea-be9a16e72fb6] Running
	I1207 23:37:58.982493  684670 system_pods.go:61] "kube-proxy-nmxm2" [21011e1c-6722-4e63-9731-1af680bb14f2] Running
	I1207 23:37:58.982496  684670 system_pods.go:61] "kube-scheduler-kindnet-600852" [3193f27f-1ba4-4432-b4d5-7f6af3c32df6] Running
	I1207 23:37:58.982501  684670 system_pods.go:61] "storage-provisioner" [e9d9092f-ca1a-4cf3-bbbd-b284d49b2f12] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:58.982508  684670 system_pods.go:74] duration metric: took 4.193925ms to wait for pod list to return data ...
	I1207 23:37:58.982519  684670 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:37:58.985103  684670 default_sa.go:45] found service account: "default"
	I1207 23:37:58.985121  684670 default_sa.go:55] duration metric: took 2.596819ms for default service account to be created ...
	I1207 23:37:58.985130  684670 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:37:58.987871  684670 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:58.987899  684670 system_pods.go:89] "coredns-66bc5c9577-8rwsj" [d85f99d6-a1ba-4cfc-bcdc-aac22ea4af3e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:58.987905  684670 system_pods.go:89] "etcd-kindnet-600852" [adf5b308-5358-4d7b-9df5-bafffa61f8b6] Running
	I1207 23:37:58.987912  684670 system_pods.go:89] "kindnet-vzkfg" [87c7cd14-d729-423a-a43f-bdb77eaeba04] Running
	I1207 23:37:58.987918  684670 system_pods.go:89] "kube-apiserver-kindnet-600852" [3c3cfd49-d544-4dfb-bf4f-7894225a944c] Running
	I1207 23:37:58.987923  684670 system_pods.go:89] "kube-controller-manager-kindnet-600852" [502a4d63-dedc-4a8b-a1ea-be9a16e72fb6] Running
	I1207 23:37:58.987928  684670 system_pods.go:89] "kube-proxy-nmxm2" [21011e1c-6722-4e63-9731-1af680bb14f2] Running
	I1207 23:37:58.987936  684670 system_pods.go:89] "kube-scheduler-kindnet-600852" [3193f27f-1ba4-4432-b4d5-7f6af3c32df6] Running
	I1207 23:37:58.987943  684670 system_pods.go:89] "storage-provisioner" [e9d9092f-ca1a-4cf3-bbbd-b284d49b2f12] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:58.987972  684670 retry.go:31] will retry after 237.710109ms: missing components: kube-dns
	I1207 23:37:59.231977  684670 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:59.232015  684670 system_pods.go:89] "coredns-66bc5c9577-8rwsj" [d85f99d6-a1ba-4cfc-bcdc-aac22ea4af3e] Running
	I1207 23:37:59.232024  684670 system_pods.go:89] "etcd-kindnet-600852" [adf5b308-5358-4d7b-9df5-bafffa61f8b6] Running
	I1207 23:37:59.232029  684670 system_pods.go:89] "kindnet-vzkfg" [87c7cd14-d729-423a-a43f-bdb77eaeba04] Running
	I1207 23:37:59.232038  684670 system_pods.go:89] "kube-apiserver-kindnet-600852" [3c3cfd49-d544-4dfb-bf4f-7894225a944c] Running
	I1207 23:37:59.232044  684670 system_pods.go:89] "kube-controller-manager-kindnet-600852" [502a4d63-dedc-4a8b-a1ea-be9a16e72fb6] Running
	I1207 23:37:59.232050  684670 system_pods.go:89] "kube-proxy-nmxm2" [21011e1c-6722-4e63-9731-1af680bb14f2] Running
	I1207 23:37:59.232053  684670 system_pods.go:89] "kube-scheduler-kindnet-600852" [3193f27f-1ba4-4432-b4d5-7f6af3c32df6] Running
	I1207 23:37:59.232056  684670 system_pods.go:89] "storage-provisioner" [e9d9092f-ca1a-4cf3-bbbd-b284d49b2f12] Running
	I1207 23:37:59.232067  684670 system_pods.go:126] duration metric: took 246.92981ms to wait for k8s-apps to be running ...
	I1207 23:37:59.232078  684670 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:37:59.232138  684670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:37:59.245442  684670 system_svc.go:56] duration metric: took 13.353564ms WaitForService to wait for kubelet
	I1207 23:37:59.245484  684670 kubeadm.go:587] duration metric: took 42.076307711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:37:59.245510  684670 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:37:59.248701  684670 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:37:59.248728  684670 node_conditions.go:123] node cpu capacity is 8
	I1207 23:37:59.248746  684670 node_conditions.go:105] duration metric: took 3.230624ms to run NodePressure ...
	I1207 23:37:59.248759  684670 start.go:242] waiting for startup goroutines ...
	I1207 23:37:59.248765  684670 start.go:247] waiting for cluster config update ...
	I1207 23:37:59.248776  684670 start.go:256] writing updated cluster config ...
	I1207 23:37:59.249023  684670 ssh_runner.go:195] Run: rm -f paused
	I1207 23:37:59.253030  684670 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:37:59.256120  684670 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8rwsj" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.260754  684670 pod_ready.go:94] pod "coredns-66bc5c9577-8rwsj" is "Ready"
	I1207 23:37:59.260779  684670 pod_ready.go:86] duration metric: took 4.635052ms for pod "coredns-66bc5c9577-8rwsj" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.262839  684670 pod_ready.go:83] waiting for pod "etcd-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.266580  684670 pod_ready.go:94] pod "etcd-kindnet-600852" is "Ready"
	I1207 23:37:59.266603  684670 pod_ready.go:86] duration metric: took 3.743047ms for pod "etcd-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.268578  684670 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.272152  684670 pod_ready.go:94] pod "kube-apiserver-kindnet-600852" is "Ready"
	I1207 23:37:59.272172  684670 pod_ready.go:86] duration metric: took 3.574542ms for pod "kube-apiserver-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.273892  684670 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.658282  684670 pod_ready.go:94] pod "kube-controller-manager-kindnet-600852" is "Ready"
	I1207 23:37:59.658312  684670 pod_ready.go:86] duration metric: took 384.397806ms for pod "kube-controller-manager-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.857388  684670 pod_ready.go:83] waiting for pod "kube-proxy-nmxm2" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:38:00.256701  684670 pod_ready.go:94] pod "kube-proxy-nmxm2" is "Ready"
	I1207 23:38:00.256732  684670 pod_ready.go:86] duration metric: took 399.315784ms for pod "kube-proxy-nmxm2" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:38:00.457202  684670 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:38:00.857454  684670 pod_ready.go:94] pod "kube-scheduler-kindnet-600852" is "Ready"
	I1207 23:38:00.857489  684670 pod_ready.go:86] duration metric: took 400.253299ms for pod "kube-scheduler-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:38:00.857505  684670 pod_ready.go:40] duration metric: took 1.604447924s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:38:00.927724  684670 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 23:38:00.929471  684670 out.go:179] * Done! kubectl is now configured to use "kindnet-600852" cluster and "default" namespace by default
	I1207 23:38:00.519526  697240 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:38:00.519553  697240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:38:00.519624  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:38:00.545899  697240 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:38:00.546113  697240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:38:00.546194  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:38:00.552833  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:38:00.575061  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:38:00.602243  697240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 23:38:00.660619  697240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:38:00.680961  697240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:38:00.701930  697240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:38:00.864755  697240 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1207 23:38:00.867777  697240 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-600852" to be "Ready" ...
	I1207 23:38:01.119300  697240 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1207 23:38:00.429434  697202 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1207 23:38:00.429461  697202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1207 23:38:00.445271  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 23:38:01.472849  697202 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.027539027s)
	I1207 23:38:01.472924  697202 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 23:38:01.473006  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:01.473006  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-600852 minikube.k8s.io/updated_at=2025_12_07T23_38_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=calico-600852 minikube.k8s.io/primary=true
	I1207 23:38:01.482625  697202 ops.go:34] apiserver oom_adj: -16
	I1207 23:38:01.545076  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:02.045428  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:02.545932  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:03.045228  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:03.545668  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:04.045744  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:04.546072  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:04.627386  697202 kubeadm.go:1114] duration metric: took 3.154450039s to wait for elevateKubeSystemPrivileges
	I1207 23:38:04.627434  697202 kubeadm.go:403] duration metric: took 14.638878278s to StartCluster
	I1207 23:38:04.627468  697202 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:38:04.627559  697202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:38:04.629712  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:38:04.630019  697202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 23:38:04.630034  697202 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:38:04.630141  697202 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:38:04.630241  697202 addons.go:70] Setting storage-provisioner=true in profile "calico-600852"
	I1207 23:38:04.630262  697202 addons.go:239] Setting addon storage-provisioner=true in "calico-600852"
	I1207 23:38:04.630262  697202 addons.go:70] Setting default-storageclass=true in profile "calico-600852"
	I1207 23:38:04.630283  697202 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-600852"
	I1207 23:38:04.630296  697202 host.go:66] Checking if "calico-600852" exists ...
	I1207 23:38:04.630712  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Status}}
	I1207 23:38:04.630911  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Status}}
	I1207 23:38:04.630953  697202 config.go:182] Loaded profile config "calico-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:38:04.633011  697202 out.go:179] * Verifying Kubernetes components...
	I1207 23:38:04.634261  697202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:38:04.661277  697202 addons.go:239] Setting addon default-storageclass=true in "calico-600852"
	I1207 23:38:04.661336  697202 host.go:66] Checking if "calico-600852" exists ...
	I1207 23:38:04.661838  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Status}}
	I1207 23:38:04.665433  697202 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:38:04.667473  697202 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:38:04.667495  697202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:38:04.667561  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:38:04.696447  697202 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:38:04.696474  697202 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:38:04.696633  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:38:04.704363  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:38:04.729350  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:38:04.760949  697202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 23:38:04.835500  697202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:38:04.857304  697202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:38:04.885364  697202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:38:05.074233  697202 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1207 23:38:05.076126  697202 node_ready.go:35] waiting up to 15m0s for node "calico-600852" to be "Ready" ...
	I1207 23:38:05.395523  697202 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1207 23:38:01.121548  697240 addons.go:530] duration metric: took 636.135374ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1207 23:38:01.369973  697240 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-600852" context rescaled to 1 replicas
	W1207 23:38:02.871108  697240 node_ready.go:57] node "custom-flannel-600852" has "Ready":"False" status (will retry)
	W1207 23:38:04.876490  697240 node_ready.go:57] node "custom-flannel-600852" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 07 23:37:25 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:25.5316673Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:37:25 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:25.536425089Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:37:25 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:25.536457627Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.708029594Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=584dca10-c7b5-4712-a9ae-7af36b03f00c name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.710944905Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5973ec6b-5ee8-4d69-94a6-cfb4b1e0a76d name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.714272384Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt/dashboard-metrics-scraper" id=a8bde38e-2572-4e21-b53c-ddcd79685cdb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.714452342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.723144759Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.723661297Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.763502533Z" level=info msg="Created container 97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt/dashboard-metrics-scraper" id=a8bde38e-2572-4e21-b53c-ddcd79685cdb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.764442981Z" level=info msg="Starting container: 97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42" id=b386a165-c253-41dd-a76c-a8b9608c5427 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.767119416Z" level=info msg="Started container" PID=1771 containerID=97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt/dashboard-metrics-scraper id=b386a165-c253-41dd-a76c-a8b9608c5427 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f10901483b03ef3a341449437aabf6b005d605472b49a06f6776d08aaaf33d7d
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.814445032Z" level=info msg="Removing container: 3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21" id=ab9e3595-7450-4b77-a9c9-fc02486dbb81 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.827658716Z" level=info msg="Removed container 3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt/dashboard-metrics-scraper" id=ab9e3595-7450-4b77-a9c9-fc02486dbb81 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.83695981Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=afc64b4c-9034-4557-a979-51ebb52d7441 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.837982762Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=77d56115-de8e-423f-a0cb-320dd9e77553 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.839074822Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bf4775b2-0888-481d-b47c-9b102d975fb1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.839215044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.844614736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.844817298Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bf221568ff80d2378228bc4a14119dc06590041f1740374c704bc029478880ac/merged/etc/passwd: no such file or directory"
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.8448541Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bf221568ff80d2378228bc4a14119dc06590041f1740374c704bc029478880ac/merged/etc/group: no such file or directory"
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.845153746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.87669398Z" level=info msg="Created container 058865ddda268775bdf21f4e133779ac38c262c9ded903bf758c68c656ba4b37: kube-system/storage-provisioner/storage-provisioner" id=bf4775b2-0888-481d-b47c-9b102d975fb1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.87740659Z" level=info msg="Starting container: 058865ddda268775bdf21f4e133779ac38c262c9ded903bf758c68c656ba4b37" id=331bdfc9-8a64-4939-9149-df11951271c0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.879412295Z" level=info msg="Started container" PID=1785 containerID=058865ddda268775bdf21f4e133779ac38c262c9ded903bf758c68c656ba4b37 description=kube-system/storage-provisioner/storage-provisioner id=331bdfc9-8a64-4939-9149-df11951271c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cc2a364fc405aa25bc4b6ba5d1d291a8384751748807ea72fcd5ef6b9803965
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	058865ddda268       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   1cc2a364fc405       storage-provisioner                                    kube-system
	97a5b2897354b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   f10901483b03e       dashboard-metrics-scraper-6ffb444bf9-l2qmt             kubernetes-dashboard
	d0dece358b07a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   2a694ad6924b0       kubernetes-dashboard-855c9754f9-x7hx7                  kubernetes-dashboard
	4e915a09b78e0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   f18c93ac1698b       busybox                                                default
	8eb4661f40adb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   34834d5f4639a       coredns-66bc5c9577-p4v2f                               kube-system
	ae571d49269c9       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           53 seconds ago      Running             kube-proxy                  0                   b1ad43600cd73       kube-proxy-7stg5                                       kube-system
	1141bc53141e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   1cc2a364fc405       storage-provisioner                                    kube-system
	03d7391848685       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   705cd5cd1c701       kindnet-55xbl                                          kube-system
	362b83f015210       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           55 seconds ago      Running             kube-scheduler              0                   e357e5f1e3cb6       kube-scheduler-default-k8s-diff-port-312944            kube-system
	fa639c7294ee1       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           55 seconds ago      Running             kube-controller-manager     0                   9beb065dece42       kube-controller-manager-default-k8s-diff-port-312944   kube-system
	b04410a9187c7       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           55 seconds ago      Running             kube-apiserver              0                   64f04b32bfd74       kube-apiserver-default-k8s-diff-port-312944            kube-system
	f27c08f4d2ee8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           55 seconds ago      Running             etcd                        0                   26acf5ba3f8e7       etcd-default-k8s-diff-port-312944                      kube-system
	
	
	==> coredns [8eb4661f40adb7e3bc509b1d373b2ad35becf93ce0d8b257ae68088048cea1a3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42156 - 52497 "HINFO IN 9139562123407335876.5391113358729451137. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029737409s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-312944
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-312944
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=default-k8s-diff-port-312944
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_36_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:36:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-312944
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:37:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:37:45 +0000   Sun, 07 Dec 2025 23:36:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:37:45 +0000   Sun, 07 Dec 2025 23:36:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:37:45 +0000   Sun, 07 Dec 2025 23:36:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:37:45 +0000   Sun, 07 Dec 2025 23:36:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-312944
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                bd0038bf-5fca-4fcf-bfc4-04aff0b70aa3
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-p4v2f                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-312944                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-55xbl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-312944             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-312944    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-7stg5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-312944             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-l2qmt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-x7hx7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 108s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x8 over 119s)  kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s                 kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     115s                 kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s                 node-controller  Node default-k8s-diff-port-312944 event: Registered Node default-k8s-diff-port-312944 in Controller
	  Normal  NodeReady                98s                  kubelet          Node default-k8s-diff-port-312944 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node default-k8s-diff-port-312944 event: Registered Node default-k8s-diff-port-312944 in Controller
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [f27c08f4d2ee8d8898a367bb16db44c1f22130d15e95d71881aa776e8567269c] <==
	{"level":"warn","ts":"2025-12-07T23:37:13.629818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.639593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.649772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.658617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.669763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.679396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.688577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.697365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.707600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.716215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.724480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.732901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.740894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.748408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.756034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.763190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.770597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.778408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.784967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.792549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.812829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.821142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.830414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.892600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:40.889271Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.602571ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766684848834146 > lease_revoke:<id:5b339afb2d2945da>","response":"size:29"}
	
	
	==> kernel <==
	 23:38:08 up  2:20,  0 user,  load average: 3.15, 2.79, 2.10
	Linux default-k8s-diff-port-312944 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [03d7391848685b4e4adc0e0cbeb5a8f00b9ca0ce5cf2a95d3e89a3e413264d20] <==
	I1207 23:37:15.304805       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:37:15.305110       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1207 23:37:15.305281       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:37:15.305295       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:37:15.305314       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:37:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:37:15.508356       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:37:15.664693       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:37:15.664761       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:37:15.703181       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:37:16.065096       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:37:16.065126       1 metrics.go:72] Registering metrics
	I1207 23:37:16.065219       1 controller.go:711] "Syncing nftables rules"
	I1207 23:37:25.509108       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:37:25.509175       1 main.go:301] handling current node
	I1207 23:37:35.513026       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:37:35.513074       1 main.go:301] handling current node
	I1207 23:37:45.509257       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:37:45.509301       1 main.go:301] handling current node
	I1207 23:37:55.511418       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:37:55.511457       1 main.go:301] handling current node
	I1207 23:38:05.515141       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:38:05.515177       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b04410a9187c7167576fa7f9cb5bf5a761981c61b37ea3b68eb353c721baab8f] <==
	I1207 23:37:14.402469       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1207 23:37:14.405407       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 23:37:14.406560       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1207 23:37:14.406584       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1207 23:37:14.408473       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1207 23:37:14.408954       1 policy_source.go:240] refreshing policies
	I1207 23:37:14.406526       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 23:37:14.406610       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 23:37:14.410127       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1207 23:37:14.428149       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:37:14.433030       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:37:14.435480       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:37:14.739718       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:37:14.740040       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:37:14.777775       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:37:14.800445       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:37:14.809207       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:37:14.855242       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.127.225"}
	I1207 23:37:14.867448       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.190.220"}
	I1207 23:37:15.304529       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 23:37:17.895015       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:37:17.895068       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:37:18.243728       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:37:18.243728       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:37:18.494131       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [fa639c7294ee1af933ce6c68db15470c1c2d5d2c404c5e0568eaac61e7ede373] <==
	I1207 23:37:17.855712       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-312944"
	I1207 23:37:17.855767       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1207 23:37:17.861432       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1207 23:37:17.861437       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1207 23:37:17.863842       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1207 23:37:17.865888       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1207 23:37:17.867940       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1207 23:37:17.869465       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1207 23:37:17.871688       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1207 23:37:17.890403       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1207 23:37:17.890433       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1207 23:37:17.890442       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1207 23:37:17.890449       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1207 23:37:17.891646       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1207 23:37:17.891682       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1207 23:37:17.891702       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1207 23:37:17.891781       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1207 23:37:17.891874       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1207 23:37:17.891931       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1207 23:37:17.893689       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1207 23:37:17.897023       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 23:37:17.915251       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 23:37:17.918928       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 23:37:17.918946       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1207 23:37:17.918958       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [ae571d49269c915740fb2cf23f9df93b135ad116f7f7e358c4a59ecfac859a14] <==
	I1207 23:37:15.133164       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:37:15.213255       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 23:37:15.314181       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 23:37:15.314225       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1207 23:37:15.314345       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:37:15.336886       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:37:15.336955       1 server_linux.go:132] "Using iptables Proxier"
	I1207 23:37:15.342948       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:37:15.343445       1 server.go:527] "Version info" version="v1.34.2"
	I1207 23:37:15.343472       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:37:15.347446       1 config.go:309] "Starting node config controller"
	I1207 23:37:15.347470       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:37:15.347492       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:37:15.347506       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:37:15.347544       1 config.go:200] "Starting service config controller"
	I1207 23:37:15.347550       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:37:15.347572       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:37:15.347577       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:37:15.347605       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:37:15.447704       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:37:15.447730       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:37:15.447727       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [362b83f015210f03925637b1b0598b825d674607d060c054cf459ff6794854a5] <==
	I1207 23:37:13.029000       1 serving.go:386] Generated self-signed cert in-memory
	W1207 23:37:14.338760       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:37:14.338899       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:37:14.338912       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:37:14.338922       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:37:14.373269       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1207 23:37:14.373301       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:37:14.376902       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:37:14.377569       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:37:14.377274       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:37:14.377298       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:37:14.479851       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 23:37:18 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:18.456895     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a46c4c27-7f70-49e5-9552-52151b217b5d-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-l2qmt\" (UID: \"a46c4c27-7f70-49e5-9552-52151b217b5d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt"
	Dec 07 23:37:18 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:18.456924     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xcp9\" (UniqueName: \"kubernetes.io/projected/8ab1a416-3cea-4d56-8a53-4645de22a61d-kube-api-access-2xcp9\") pod \"kubernetes-dashboard-855c9754f9-x7hx7\" (UID: \"8ab1a416-3cea-4d56-8a53-4645de22a61d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-x7hx7"
	Dec 07 23:37:21 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:21.088289     730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 07 23:37:21 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:21.760932     730 scope.go:117] "RemoveContainer" containerID="3719bbfe635f807e31451e426c963e5cf8bc57605981d2cb4d4386eac693256f"
	Dec 07 23:37:22 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:22.766550     730 scope.go:117] "RemoveContainer" containerID="3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21"
	Dec 07 23:37:22 default-k8s-diff-port-312944 kubelet[730]: E1207 23:37:22.766728     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2qmt_kubernetes-dashboard(a46c4c27-7f70-49e5-9552-52151b217b5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt" podUID="a46c4c27-7f70-49e5-9552-52151b217b5d"
	Dec 07 23:37:22 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:22.767016     730 scope.go:117] "RemoveContainer" containerID="3719bbfe635f807e31451e426c963e5cf8bc57605981d2cb4d4386eac693256f"
	Dec 07 23:37:23 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:23.772256     730 scope.go:117] "RemoveContainer" containerID="3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21"
	Dec 07 23:37:23 default-k8s-diff-port-312944 kubelet[730]: E1207 23:37:23.772482     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2qmt_kubernetes-dashboard(a46c4c27-7f70-49e5-9552-52151b217b5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt" podUID="a46c4c27-7f70-49e5-9552-52151b217b5d"
	Dec 07 23:37:24 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:24.775404     730 scope.go:117] "RemoveContainer" containerID="3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21"
	Dec 07 23:37:24 default-k8s-diff-port-312944 kubelet[730]: E1207 23:37:24.775641     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2qmt_kubernetes-dashboard(a46c4c27-7f70-49e5-9552-52151b217b5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt" podUID="a46c4c27-7f70-49e5-9552-52151b217b5d"
	Dec 07 23:37:29 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:29.144916     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-x7hx7" podStartSLOduration=4.922350406 podStartE2EDuration="11.14485631s" podCreationTimestamp="2025-12-07 23:37:18 +0000 UTC" firstStartedPulling="2025-12-07 23:37:18.687110658 +0000 UTC m=+7.077964335" lastFinishedPulling="2025-12-07 23:37:24.909616573 +0000 UTC m=+13.300470239" observedRunningTime="2025-12-07 23:37:25.792743777 +0000 UTC m=+14.183597466" watchObservedRunningTime="2025-12-07 23:37:29.14485631 +0000 UTC m=+17.535709996"
	Dec 07 23:37:37 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:37.707518     730 scope.go:117] "RemoveContainer" containerID="3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21"
	Dec 07 23:37:37 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:37.812418     730 scope.go:117] "RemoveContainer" containerID="3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21"
	Dec 07 23:37:37 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:37.812694     730 scope.go:117] "RemoveContainer" containerID="97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42"
	Dec 07 23:37:37 default-k8s-diff-port-312944 kubelet[730]: E1207 23:37:37.812930     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2qmt_kubernetes-dashboard(a46c4c27-7f70-49e5-9552-52151b217b5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt" podUID="a46c4c27-7f70-49e5-9552-52151b217b5d"
	Dec 07 23:37:42 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:42.795758     730 scope.go:117] "RemoveContainer" containerID="97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42"
	Dec 07 23:37:42 default-k8s-diff-port-312944 kubelet[730]: E1207 23:37:42.796003     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2qmt_kubernetes-dashboard(a46c4c27-7f70-49e5-9552-52151b217b5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt" podUID="a46c4c27-7f70-49e5-9552-52151b217b5d"
	Dec 07 23:37:45 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:45.836529     730 scope.go:117] "RemoveContainer" containerID="1141bc53141e8e773858f382cacf8f035e2c792f49fad9bc151a5de36582d819"
	Dec 07 23:37:55 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:55.707316     730 scope.go:117] "RemoveContainer" containerID="97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42"
	Dec 07 23:37:55 default-k8s-diff-port-312944 kubelet[730]: E1207 23:37:55.707615     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2qmt_kubernetes-dashboard(a46c4c27-7f70-49e5-9552-52151b217b5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt" podUID="a46c4c27-7f70-49e5-9552-52151b217b5d"
	Dec 07 23:38:05 default-k8s-diff-port-312944 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 07 23:38:05 default-k8s-diff-port-312944 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 07 23:38:05 default-k8s-diff-port-312944 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 07 23:38:05 default-k8s-diff-port-312944 systemd[1]: kubelet.service: Consumed 1.817s CPU time.
	
	
	==> kubernetes-dashboard [d0dece358b07ad46edbe28384e450be226ec46d5ce2446c6c96076c671ea49ad] <==
	2025/12/07 23:37:24 Starting overwatch
	2025/12/07 23:37:24 Using namespace: kubernetes-dashboard
	2025/12/07 23:37:24 Using in-cluster config to connect to apiserver
	2025/12/07 23:37:24 Using secret token for csrf signing
	2025/12/07 23:37:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/07 23:37:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/07 23:37:25 Successful initial request to the apiserver, version: v1.34.2
	2025/12/07 23:37:25 Generating JWE encryption key
	2025/12/07 23:37:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/07 23:37:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/07 23:37:25 Initializing JWE encryption key from synchronized object
	2025/12/07 23:37:25 Creating in-cluster Sidecar client
	2025/12/07 23:37:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/07 23:37:25 Serving insecurely on HTTP port: 9090
	2025/12/07 23:37:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [058865ddda268775bdf21f4e133779ac38c262c9ded903bf758c68c656ba4b37] <==
	I1207 23:37:45.894150       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 23:37:45.901393       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 23:37:45.901436       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 23:37:45.903638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:49.359127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:53.620511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:57.219456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:00.273023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:03.295044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:03.299756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:38:03.299940       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 23:38:03.300005       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8139ddd6-5276-4d69-8ef0-8cf0f6816009", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-312944_9bd176d2-f9df-496d-8723-a8ee1ef620ac became leader
	I1207 23:38:03.300120       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-312944_9bd176d2-f9df-496d-8723-a8ee1ef620ac!
	W1207 23:38:03.301859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:03.305076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:38:03.400407       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-312944_9bd176d2-f9df-496d-8723-a8ee1ef620ac!
	W1207 23:38:05.310056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:05.320683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:07.325563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:07.330579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [1141bc53141e8e773858f382cacf8f035e2c792f49fad9bc151a5de36582d819] <==
	I1207 23:37:15.095620       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1207 23:37:45.098622       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944: exit status 2 (388.835944ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-312944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-312944
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-312944:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61",
	        "Created": "2025-12-07T23:35:53.17207692Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 687513,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-07T23:37:04.966230146Z",
	            "FinishedAt": "2025-12-07T23:37:04.007935147Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61/hostname",
	        "HostsPath": "/var/lib/docker/containers/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61/hosts",
	        "LogPath": "/var/lib/docker/containers/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61/df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61-json.log",
	        "Name": "/default-k8s-diff-port-312944",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-312944:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-312944",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "df4662170d3c8e92c5a6bf9174e1eb910dbfeaa1b35d09c598d8401172890e61",
	                "LowerDir": "/var/lib/docker/overlay2/0118ae1fd177a027d3c4130ba6cb419228d15d23a753279249b22be530579070-init/diff:/var/lib/docker/overlay2/d2e9c5481c0f5ed3745e4b3c85b207e8e3f273f5a1d285f7bc7bfa20976ad16e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0118ae1fd177a027d3c4130ba6cb419228d15d23a753279249b22be530579070/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0118ae1fd177a027d3c4130ba6cb419228d15d23a753279249b22be530579070/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0118ae1fd177a027d3c4130ba6cb419228d15d23a753279249b22be530579070/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-312944",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-312944/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-312944",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-312944",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-312944",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5f942a550c56dd9081abe1d3b1e36641c4925906b3582795c4fda0bbe2174dd8",
	            "SandboxKey": "/var/run/docker/netns/5f942a550c56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-312944": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "217dc275cbc6467e058b35e68e0b1d3b5b2cb07cc2e90f33cf455ec5c147cec4",
	                    "EndpointID": "532627a0168cf10b204310218998e053bf627273757d970f30a2d61e2fa8843a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "36:52:73:3c:63:a5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-312944",
	                        "df4662170d3c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944: exit status 2 (375.492394ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-312944 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-312944 logs -n 25: (1.573243012s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-600852 sudo systemctl cat docker --no-pager                                                                                                                │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo docker system info                                                                                                                             │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cri-dockerd --version                                                                                                                          │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ ssh     │ -p auto-600852 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo containerd config dump                                                                                                                         │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ delete  │ -p embed-certs-654118                                                                                                                                              │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ ssh     │ -p auto-600852 sudo crio config                                                                                                                                    │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ delete  │ -p auto-600852                                                                                                                                                     │ auto-600852                  │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ delete  │ -p embed-certs-654118                                                                                                                                              │ embed-certs-654118           │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │ 07 Dec 25 23:37 UTC │
	│ start   │ -p calico-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-600852                │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ start   │ -p custom-flannel-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-600852        │ jenkins │ v1.37.0 │ 07 Dec 25 23:37 UTC │                     │
	│ image   │ default-k8s-diff-port-312944 image list --format=json                                                                                                              │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:38 UTC │ 07 Dec 25 23:38 UTC │
	│ pause   │ -p default-k8s-diff-port-312944 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-312944 │ jenkins │ v1.37.0 │ 07 Dec 25 23:38 UTC │                     │
	│ ssh     │ -p kindnet-600852 pgrep -a kubelet                                                                                                                                 │ kindnet-600852               │ jenkins │ v1.37.0 │ 07 Dec 25 23:38 UTC │ 07 Dec 25 23:38 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 23:37:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 23:37:35.462168  697240 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:37:35.462277  697240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:37:35.462289  697240 out.go:374] Setting ErrFile to fd 2...
	I1207 23:37:35.462294  697240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:37:35.462540  697240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:37:35.463172  697240 out.go:368] Setting JSON to false
	I1207 23:37:35.464794  697240 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8399,"bootTime":1765142256,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:37:35.464880  697240 start.go:143] virtualization: kvm guest
	I1207 23:37:35.466843  697240 out.go:179] * [custom-flannel-600852] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:37:35.468251  697240 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:37:35.468287  697240 notify.go:221] Checking for updates...
	I1207 23:37:35.471267  697240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:37:35.472792  697240 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:37:35.473878  697240 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:37:35.475283  697240 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:37:35.476465  697240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:37:35.400195  697202 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:35.400353  697202 config.go:182] Loaded profile config "kindnet-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:35.400514  697202 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:37:35.429288  697202 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:37:35.429477  697202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:35.494816  697202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-07 23:37:35.48406654 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:35.494929  697202 docker.go:319] overlay module found
	I1207 23:37:35.497562  697202 out.go:179] * Using the docker driver based on user configuration
	I1207 23:37:35.478098  697240 config.go:182] Loaded profile config "default-k8s-diff-port-312944": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:35.478226  697240 config.go:182] Loaded profile config "kindnet-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:35.478393  697240 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:37:35.505909  697240 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:37:35.506077  697240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:35.571510  697240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-07 23:37:35.560842683 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:35.571614  697240 docker.go:319] overlay module found
	I1207 23:37:35.498843  697202 start.go:309] selected driver: docker
	I1207 23:37:35.498868  697202 start.go:927] validating driver "docker" against <nil>
	I1207 23:37:35.498886  697202 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:37:35.499584  697202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:35.571218  697202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-07 23:37:35.560842683 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:35.571389  697202 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 23:37:35.571712  697202 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:37:35.573371  697202 out.go:179] * Using Docker driver with root privileges
	I1207 23:37:35.573376  697240 out.go:179] * Using the docker driver based on user configuration
	I1207 23:37:35.574608  697202 cni.go:84] Creating CNI manager for "calico"
	I1207 23:37:35.574627  697202 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1207 23:37:35.574707  697202 start.go:353] cluster config:
	{Name:calico-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:35.576216  697202 out.go:179] * Starting "calico-600852" primary control-plane node in "calico-600852" cluster
	I1207 23:37:35.577387  697202 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:37:35.578730  697202 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:37:35.579818  697202 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:35.579894  697202 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 23:37:35.579910  697202 cache.go:65] Caching tarball of preloaded images
	I1207 23:37:35.579944  697202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:37:35.580081  697202 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:37:35.580105  697202 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:37:35.580254  697202 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/config.json ...
	I1207 23:37:35.580287  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/config.json: {Name:mkc3ab2518e2ac158485368a4283678c9e1aa504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:35.606391  697202 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:37:35.606419  697202 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:37:35.606441  697202 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:37:35.606479  697202 start.go:360] acquireMachinesLock for calico-600852: {Name:mk63843d0e955c4ef490e3f22aabe305d776f228 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:37:35.606600  697202 start.go:364] duration metric: took 97.96µs to acquireMachinesLock for "calico-600852"
	I1207 23:37:35.606633  697202 start.go:93] Provisioning new machine with config: &{Name:calico-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-600852 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:37:35.606730  697202 start.go:125] createHost starting for "" (driver="docker")
	I1207 23:37:35.574610  697240 start.go:309] selected driver: docker
	I1207 23:37:35.574626  697240 start.go:927] validating driver "docker" against <nil>
	I1207 23:37:35.574640  697240 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:37:35.575351  697240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:37:35.637046  697240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-12-07 23:37:35.626148888 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:37:35.637269  697240 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 23:37:35.637584  697240 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:37:35.639544  697240 out.go:179] * Using Docker driver with root privileges
	I1207 23:37:35.640617  697240 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1207 23:37:35.640657  697240 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1207 23:37:35.640764  697240 start.go:353] cluster config:
	{Name:custom-flannel-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:35.642270  697240 out.go:179] * Starting "custom-flannel-600852" primary control-plane node in "custom-flannel-600852" cluster
	I1207 23:37:35.643886  697240 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 23:37:35.645224  697240 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1207 23:37:35.646387  697240 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:35.646419  697240 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 23:37:35.646438  697240 cache.go:65] Caching tarball of preloaded images
	I1207 23:37:35.646478  697240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 23:37:35.646540  697240 preload.go:238] Found /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1207 23:37:35.646556  697240 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1207 23:37:35.646682  697240 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/config.json ...
	I1207 23:37:35.646712  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/config.json: {Name:mk800147fe034f5238922fec66d596f6aa169033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:35.671849  697240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1207 23:37:35.671880  697240 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1207 23:37:35.671904  697240 cache.go:243] Successfully downloaded all kic artifacts
	I1207 23:37:35.671944  697240 start.go:360] acquireMachinesLock for custom-flannel-600852: {Name:mk15b40cec96074cdc3d9121b669340a772a5a19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 23:37:35.672061  697240 start.go:364] duration metric: took 93.067µs to acquireMachinesLock for "custom-flannel-600852"
	I1207 23:37:35.672086  697240 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-600852 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:37:35.672186  697240 start.go:125] createHost starting for "" (driver="docker")
	W1207 23:37:34.456566  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:36.457410  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:36.533401  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:39.033103  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	I1207 23:37:35.608827  697202 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1207 23:37:35.609102  697202 start.go:159] libmachine.API.Create for "calico-600852" (driver="docker")
	I1207 23:37:35.609146  697202 client.go:173] LocalClient.Create starting
	I1207 23:37:35.609246  697202 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem
	I1207 23:37:35.609297  697202 main.go:143] libmachine: Decoding PEM data...
	I1207 23:37:35.609323  697202 main.go:143] libmachine: Parsing certificate...
	I1207 23:37:35.609414  697202 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem
	I1207 23:37:35.609442  697202 main.go:143] libmachine: Decoding PEM data...
	I1207 23:37:35.609462  697202 main.go:143] libmachine: Parsing certificate...
	I1207 23:37:35.609968  697202 cli_runner.go:164] Run: docker network inspect calico-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 23:37:35.629047  697202 cli_runner.go:211] docker network inspect calico-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 23:37:35.629121  697202 network_create.go:284] running [docker network inspect calico-600852] to gather additional debugging logs...
	I1207 23:37:35.629144  697202 cli_runner.go:164] Run: docker network inspect calico-600852
	W1207 23:37:35.648368  697202 cli_runner.go:211] docker network inspect calico-600852 returned with exit code 1
	I1207 23:37:35.648412  697202 network_create.go:287] error running [docker network inspect calico-600852]: docker network inspect calico-600852: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-600852 not found
	I1207 23:37:35.648437  697202 network_create.go:289] output of [docker network inspect calico-600852]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-600852 not found
	
	** /stderr **
	I1207 23:37:35.648580  697202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:37:35.668431  697202 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-918c8f4f6e86 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:f0:02:fe:94:4b} reservation:<nil>}
	I1207 23:37:35.669359  697202 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce07fb07c16c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:d2:35:46:a2:0a} reservation:<nil>}
	I1207 23:37:35.669895  697202 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f198eadca31e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:79:39:d6:10:dc} reservation:<nil>}
	I1207 23:37:35.670453  697202 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2feb264898ec IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:86:57:43:7d:13:a7} reservation:<nil>}
	I1207 23:37:35.671461  697202 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e3a4b0}
	I1207 23:37:35.671497  697202 network_create.go:124] attempt to create docker network calico-600852 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1207 23:37:35.671563  697202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-600852 calico-600852
	I1207 23:37:35.726680  697202 network_create.go:108] docker network calico-600852 192.168.85.0/24 created
	I1207 23:37:35.726710  697202 kic.go:121] calculated static IP "192.168.85.2" for the "calico-600852" container
	I1207 23:37:35.726800  697202 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 23:37:35.746383  697202 cli_runner.go:164] Run: docker volume create calico-600852 --label name.minikube.sigs.k8s.io=calico-600852 --label created_by.minikube.sigs.k8s.io=true
	I1207 23:37:35.767378  697202 oci.go:103] Successfully created a docker volume calico-600852
	I1207 23:37:35.767464  697202 cli_runner.go:164] Run: docker run --rm --name calico-600852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-600852 --entrypoint /usr/bin/test -v calico-600852:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 23:37:36.214624  697202 oci.go:107] Successfully prepared a docker volume calico-600852
	I1207 23:37:36.214699  697202 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:36.214712  697202 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 23:37:36.214806  697202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-600852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1207 23:37:35.674275  697240 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1207 23:37:35.674596  697240 start.go:159] libmachine.API.Create for "custom-flannel-600852" (driver="docker")
	I1207 23:37:35.674633  697240 client.go:173] LocalClient.Create starting
	I1207 23:37:35.674706  697240 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem
	I1207 23:37:35.674738  697240 main.go:143] libmachine: Decoding PEM data...
	I1207 23:37:35.674757  697240 main.go:143] libmachine: Parsing certificate...
	I1207 23:37:35.674823  697240 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem
	I1207 23:37:35.674848  697240 main.go:143] libmachine: Decoding PEM data...
	I1207 23:37:35.674858  697240 main.go:143] libmachine: Parsing certificate...
	I1207 23:37:35.675226  697240 cli_runner.go:164] Run: docker network inspect custom-flannel-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 23:37:35.694511  697240 cli_runner.go:211] docker network inspect custom-flannel-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 23:37:35.694611  697240 network_create.go:284] running [docker network inspect custom-flannel-600852] to gather additional debugging logs...
	I1207 23:37:35.694640  697240 cli_runner.go:164] Run: docker network inspect custom-flannel-600852
	W1207 23:37:35.714511  697240 cli_runner.go:211] docker network inspect custom-flannel-600852 returned with exit code 1
	I1207 23:37:35.714550  697240 network_create.go:287] error running [docker network inspect custom-flannel-600852]: docker network inspect custom-flannel-600852: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-600852 not found
	I1207 23:37:35.714572  697240 network_create.go:289] output of [docker network inspect custom-flannel-600852]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-600852 not found
	
	** /stderr **
	I1207 23:37:35.714707  697240 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:37:35.733767  697240 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-918c8f4f6e86 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:f0:02:fe:94:4b} reservation:<nil>}
	I1207 23:37:35.734778  697240 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ce07fb07c16c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:d2:35:46:a2:0a} reservation:<nil>}
	I1207 23:37:35.735418  697240 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f198eadca31e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:79:39:d6:10:dc} reservation:<nil>}
	I1207 23:37:35.736182  697240 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2feb264898ec IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:86:57:43:7d:13:a7} reservation:<nil>}
	I1207 23:37:35.737085  697240 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-195088d2e9e3 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:7a:71:26:87:28:da} reservation:<nil>}
	I1207 23:37:35.737835  697240 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-217dc275cbc6 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:a2:b0:5a:0f:49:91} reservation:<nil>}
	I1207 23:37:35.738645  697240 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e3de60}
	I1207 23:37:35.738676  697240 network_create.go:124] attempt to create docker network custom-flannel-600852 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1207 23:37:35.738722  697240 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-600852 custom-flannel-600852
	I1207 23:37:35.791482  697240 network_create.go:108] docker network custom-flannel-600852 192.168.103.0/24 created
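Note: the subnet scan above walks the existing docker bridge networks (192.168.49.0/24 through 192.168.94.0/24 are already taken) and settles on the first free private /24, here 192.168.103.0/24. For reference, a minimal shell sketch that mirrors the logged network-create call and then verifies what was assigned; all flags are taken from the command above:

    # create the profile's dedicated bridge network (mirrors the logged command)
    docker network create --driver=bridge \
      --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=custom-flannel-600852 \
      custom-flannel-600852

    # confirm the subnet and gateway docker recorded for it
    docker network inspect custom-flannel-600852 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'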
	I1207 23:37:35.791527  697240 kic.go:121] calculated static IP "192.168.103.2" for the "custom-flannel-600852" container
	I1207 23:37:35.791606  697240 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 23:37:35.813315  697240 cli_runner.go:164] Run: docker volume create custom-flannel-600852 --label name.minikube.sigs.k8s.io=custom-flannel-600852 --label created_by.minikube.sigs.k8s.io=true
	I1207 23:37:35.834796  697240 oci.go:103] Successfully created a docker volume custom-flannel-600852
	I1207 23:37:35.834882  697240 cli_runner.go:164] Run: docker run --rm --name custom-flannel-600852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-600852 --entrypoint /usr/bin/test -v custom-flannel-600852:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1207 23:37:36.263573  697240 oci.go:107] Successfully prepared a docker volume custom-flannel-600852
	I1207 23:37:36.263657  697240 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:36.263673  697240 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 23:37:36.263770  697240 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-600852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
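The two docker run calls above seed the profile's /var volume: the --entrypoint /usr/bin/test invocation only proves the volume mounts, and this one untars the cached cri-o image preload into it. A sketch of the extraction step, mirroring the logged command (paths are the jenkins cache locations from this run):

    docker run --rm --entrypoint /usr/bin/tar \
      -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro \
      -v custom-flannel-600852:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 \
      -I lz4 -xf /preloaded.tar -C /extractDir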
	W1207 23:37:38.956943  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:40.990084  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:41.532651  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:44.032204  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	I1207 23:37:42.339866  697202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-600852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (6.125015735s)
	I1207 23:37:42.339900  697202 kic.go:203] duration metric: took 6.125183748s to extract preloaded images to volume ...
	W1207 23:37:42.340009  697202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 23:37:42.340046  697202 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 23:37:42.340094  697202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 23:37:42.404789  697202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-600852 --name calico-600852 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-600852 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-600852 --network calico-600852 --ip 192.168.85.2 --volume calico-600852:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 23:37:42.823076  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Running}}
	I1207 23:37:42.844245  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Status}}
	I1207 23:37:42.869450  697202 cli_runner.go:164] Run: docker exec calico-600852 stat /var/lib/dpkg/alternatives/iptables
	I1207 23:37:42.927150  697202 oci.go:144] the created container "calico-600852" has a running status.
	I1207 23:37:42.927262  697202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa...
	I1207 23:37:42.993985  697202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 23:37:43.027596  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Status}}
	I1207 23:37:43.058350  697202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 23:37:43.058376  697202 kic_runner.go:114] Args: [docker exec --privileged calico-600852 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 23:37:43.124558  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Status}}
	I1207 23:37:43.150157  697202 machine.go:94] provisionDockerMachine start ...
	I1207 23:37:43.150254  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:43.173856  697202 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:43.174237  697202 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1207 23:37:43.174284  697202 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:37:43.175136  697202 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59490->127.0.0.1:33493: read: connection reset by peer
	I1207 23:37:42.340199  697240 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-600852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (6.076357797s)
	I1207 23:37:42.340228  697240 kic.go:203] duration metric: took 6.076550828s to extract preloaded images to volume ...
	W1207 23:37:42.340312  697240 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1207 23:37:42.340376  697240 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1207 23:37:42.340433  697240 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 23:37:42.404804  697240 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-600852 --name custom-flannel-600852 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-600852 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-600852 --network custom-flannel-600852 --ip 192.168.103.2 --volume custom-flannel-600852:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1207 23:37:42.706514  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Running}}
	I1207 23:37:42.725193  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Status}}
	I1207 23:37:42.746701  697240 cli_runner.go:164] Run: docker exec custom-flannel-600852 stat /var/lib/dpkg/alternatives/iptables
	I1207 23:37:42.796848  697240 oci.go:144] the created container "custom-flannel-600852" has a running status.
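The node itself is the privileged kicbase container started at 23:37:42.404804; that single-line invocation is repeated below with line breaks only, no flags changed, for readability:

    docker run -d -t --privileged --security-opt seccomp=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --hostname custom-flannel-600852 --name custom-flannel-600852 \
      --label created_by.minikube.sigs.k8s.io=true \
      --label name.minikube.sigs.k8s.io=custom-flannel-600852 \
      --label role.minikube.sigs.k8s.io= \
      --label mode.minikube.sigs.k8s.io=custom-flannel-600852 \
      --network custom-flannel-600852 --ip 192.168.103.2 \
      --volume custom-flannel-600852:/var \
      --security-opt apparmor=unconfined --memory=3072mb -e container=docker \
      --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
      --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164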
	I1207 23:37:42.796890  697240 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa...
	I1207 23:37:42.928016  697240 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 23:37:42.967592  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Status}}
	I1207 23:37:43.000865  697240 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 23:37:43.000893  697240 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-600852 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 23:37:43.067539  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Status}}
	I1207 23:37:43.094592  697240 machine.go:94] provisionDockerMachine start ...
	I1207 23:37:43.094686  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:43.124045  697240 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:43.125001  697240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1207 23:37:43.125028  697240 main.go:143] libmachine: About to run SSH command:
	hostname
	I1207 23:37:43.279170  697240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-600852
	
	I1207 23:37:43.279198  697240 ubuntu.go:182] provisioning hostname "custom-flannel-600852"
	I1207 23:37:43.279267  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:43.301219  697240 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:43.301912  697240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1207 23:37:43.301982  697240 main.go:143] libmachine: About to run SSH command:
	sudo hostname custom-flannel-600852 && echo "custom-flannel-600852" | sudo tee /etc/hostname
	I1207 23:37:43.449441  697240 main.go:143] libmachine: SSH cmd err, output: <nil>: custom-flannel-600852
	
	I1207 23:37:43.449535  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:43.472066  697240 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:43.472395  697240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1207 23:37:43.472440  697240 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-600852' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-600852/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-600852' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:37:43.603212  697240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:37:43.603253  697240 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:37:43.603290  697240 ubuntu.go:190] setting up certificates
	I1207 23:37:43.603306  697240 provision.go:84] configureAuth start
	I1207 23:37:43.603388  697240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-600852
	I1207 23:37:43.623316  697240 provision.go:143] copyHostCerts
	I1207 23:37:43.623426  697240 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:37:43.623446  697240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:37:43.623540  697240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:37:43.623668  697240 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:37:43.623680  697240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:37:43.623726  697240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:37:43.623860  697240 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:37:43.623877  697240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:37:43.623917  697240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:37:43.624024  697240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-600852 san=[127.0.0.1 192.168.103.2 custom-flannel-600852 localhost minikube]
	I1207 23:37:43.702486  697240 provision.go:177] copyRemoteCerts
	I1207 23:37:43.702564  697240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:37:43.702613  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:43.721075  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:37:43.816180  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:37:43.837404  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1207 23:37:43.855233  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:37:43.874256  697240 provision.go:87] duration metric: took 270.933131ms to configureAuth
	I1207 23:37:43.874285  697240 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:37:43.874488  697240 config.go:182] Loaded profile config "custom-flannel-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:43.874601  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:43.892175  697240 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:43.892426  697240 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1207 23:37:43.892445  697240 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:37:44.168099  697240 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:37:44.168124  697240 machine.go:97] duration metric: took 1.073506601s to provisionDockerMachine
	I1207 23:37:44.168137  697240 client.go:176] duration metric: took 8.493496154s to LocalClient.Create
	I1207 23:37:44.168161  697240 start.go:167] duration metric: took 8.49356644s to libmachine.API.Create "custom-flannel-600852"
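Provisioning ends by writing the insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarting cri-o (the SSH command at 23:37:43.892445 above). A quick sketch for confirming the setting from inside the node, assuming the usual `minikube -p custom-flannel-600852 ssh` entry point:

    # run inside the node container
    cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio           # expect: active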
	I1207 23:37:44.168171  697240 start.go:293] postStartSetup for "custom-flannel-600852" (driver="docker")
	I1207 23:37:44.168186  697240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:37:44.168251  697240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:37:44.168300  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:44.187533  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:37:44.285708  697240 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:37:44.289278  697240 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:37:44.289311  697240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:37:44.289345  697240 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:37:44.289418  697240 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:37:44.289571  697240 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:37:44.289703  697240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:37:44.297700  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:37:44.318296  697240 start.go:296] duration metric: took 150.110422ms for postStartSetup
	I1207 23:37:44.318665  697240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-600852
	I1207 23:37:44.336837  697240 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/config.json ...
	I1207 23:37:44.337147  697240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:37:44.337202  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:44.355307  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:37:44.445690  697240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:37:44.450318  697240 start.go:128] duration metric: took 8.778116889s to createHost
	I1207 23:37:44.450362  697240 start.go:83] releasing machines lock for "custom-flannel-600852", held for 8.778286664s
	I1207 23:37:44.450440  697240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-600852
	I1207 23:37:44.469538  697240 ssh_runner.go:195] Run: cat /version.json
	I1207 23:37:44.469584  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:44.469610  697240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:37:44.469678  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:37:44.487668  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:37:44.488859  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:37:44.634686  697240 ssh_runner.go:195] Run: systemctl --version
	I1207 23:37:44.641623  697240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:37:44.677161  697240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:37:44.681818  697240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:37:44.681886  697240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:37:44.708292  697240 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
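Before the test's own flannel manifest is wired in, any stock bridge/podman CNI configs are renamed out of the way (the find/mv above), so only the intended CNI config stays active. A sketch for checking the result inside the node, assuming only the two files named in the log were present:

    ls /etc/cni/net.d/
    # expected after the step above:
    #   10-crio-bridge.conflist.disabled.mk_disabled
    #   87-podman-bridge.conflist.mk_disabled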
	I1207 23:37:44.708318  697240 start.go:496] detecting cgroup driver to use...
	I1207 23:37:44.708378  697240 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:37:44.708427  697240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:37:44.725043  697240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:37:44.737747  697240 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:37:44.737811  697240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:37:44.754229  697240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:37:44.771728  697240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:37:44.856910  697240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:37:44.946607  697240 docker.go:234] disabling docker service ...
	I1207 23:37:44.946683  697240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:37:44.965739  697240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:37:44.978062  697240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:37:45.065191  697240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:37:45.152212  697240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:37:45.165142  697240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:37:45.179683  697240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:37:45.179755  697240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.190176  697240 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:37:45.190240  697240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.199541  697240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.208651  697240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.217593  697240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:37:45.226681  697240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.236063  697240 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.250399  697240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:45.259658  697240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:37:45.267649  697240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:37:45.275381  697240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:45.355276  697240 ssh_runner.go:195] Run: sudo systemctl restart crio
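The run of sed edits above rewrites the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Reconstructed from those sed expressions (a sketch, not a dump of the real file), the drop-in should now contain roughly:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])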
	I1207 23:37:45.510350  697240 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:37:45.510416  697240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:37:45.514689  697240 start.go:564] Will wait 60s for crictl version
	I1207 23:37:45.514755  697240 ssh_runner.go:195] Run: which crictl
	I1207 23:37:45.518370  697240 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:37:45.545589  697240 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1207 23:37:45.545682  697240 ssh_runner.go:195] Run: crio --version
	I1207 23:37:45.574304  697240 ssh_runner.go:195] Run: crio --version
	I1207 23:37:45.604889  697240 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1207 23:37:45.606350  697240 cli_runner.go:164] Run: docker network inspect custom-flannel-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:37:45.625246  697240 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1207 23:37:45.629591  697240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:37:45.640950  697240 kubeadm.go:884] updating cluster {Name:custom-flannel-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-600852 Namespace:default APIServerHAVIP: APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCore
DNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:37:45.641114  697240 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:45.641163  697240 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:37:45.675335  697240 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:37:45.675361  697240 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:37:45.675409  697240 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:37:45.702412  697240 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:37:45.702437  697240 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:37:45.702447  697240 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.2 crio true true} ...
	I1207 23:37:45.702550  697240 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-600852 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1207 23:37:45.702632  697240 ssh_runner.go:195] Run: crio config
	I1207 23:37:45.749943  697240 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1207 23:37:45.749986  697240 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:37:45.750007  697240 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-600852 NodeName:custom-flannel-600852 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:37:45.750119  697240 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-600852"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:37:45.750180  697240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:37:45.758753  697240 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:37:45.758837  697240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:37:45.767296  697240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1207 23:37:45.780180  697240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:37:45.795843  697240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
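The kubeadm config dumped in full above has just been copied to /var/tmp/minikube/kubeadm.yaml.new; it is promoted to kubeadm.yaml right before `kubeadm init` further down. A sketch for sanity-checking it on the node, assuming the kubeadm build for v1.34.2 still ships the `config validate` subcommand:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new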
	I1207 23:37:45.809254  697240 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:37:45.813068  697240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:37:45.823824  697240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:45.922810  697240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:37:45.954280  697240 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852 for IP: 192.168.103.2
	I1207 23:37:45.954305  697240 certs.go:195] generating shared ca certs ...
	I1207 23:37:45.954350  697240 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:45.954525  697240 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:37:45.954583  697240 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:37:45.954599  697240 certs.go:257] generating profile certs ...
	I1207 23:37:45.954671  697240 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/client.key
	I1207 23:37:45.954687  697240 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/client.crt with IP's: []
	I1207 23:37:46.026656  697240 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/client.crt ...
	I1207 23:37:46.026709  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/client.crt: {Name:mk8a9624c431cb6edf9711331cdf2043026fc87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:46.026910  697240 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/client.key ...
	I1207 23:37:46.026929  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/client.key: {Name:mk32c7d69ac8b7e73f5693f03228f28056e7f2f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:46.027044  697240 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.key.835b6359
	I1207 23:37:46.027066  697240 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt.835b6359 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1207 23:37:46.119650  697240 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt.835b6359 ...
	I1207 23:37:46.119682  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt.835b6359: {Name:mk877482c89ea5c11c3d56ef01d2dd1d5ef365ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:46.119871  697240 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.key.835b6359 ...
	I1207 23:37:46.119894  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.key.835b6359: {Name:mked33bc8da818f53bc50f0f0e4ef36a5189fa9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:46.120005  697240 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt.835b6359 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt
	I1207 23:37:46.120100  697240 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.key.835b6359 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.key
	I1207 23:37:46.120186  697240 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.key
	I1207 23:37:46.120209  697240 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.crt with IP's: []
	I1207 23:37:46.177708  697240 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.crt ...
	I1207 23:37:46.177738  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.crt: {Name:mkd8e414040685141640fecdc73a2a45affca604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:46.177898  697240 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.key ...
	I1207 23:37:46.177910  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.key: {Name:mk0f72864b00bc12cd813e39176c3793627ff229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
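The profile certs generated in this block are the client cert, the apiserver serving cert (SAN IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.103.2, as logged at 23:37:46.027066), and the aggregator proxy-client pair. A sketch for inspecting the apiserver cert's SANs on the host after this step:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt \
      | grep -A1 'Subject Alternative Name'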
	I1207 23:37:46.178092  697240 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:37:46.178131  697240 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:37:46.178145  697240 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:37:46.178169  697240 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:37:46.178196  697240 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:37:46.178229  697240 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:37:46.178275  697240 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:37:46.178880  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:37:46.198288  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:37:46.217010  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:37:46.237272  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:37:46.258805  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1207 23:37:46.276861  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 23:37:46.294322  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:37:46.312873  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/custom-flannel-600852/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 23:37:46.332964  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:37:46.354112  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:37:46.373402  697240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:37:46.392365  697240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:37:46.406030  697240 ssh_runner.go:195] Run: openssl version
	I1207 23:37:46.412665  697240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:37:46.420869  697240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:37:46.429539  697240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:37:46.433803  697240 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:37:46.433871  697240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:37:46.472505  697240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:37:46.480520  697240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
	I1207 23:37:46.489387  697240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:46.497259  697240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:37:46.505422  697240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:46.509827  697240 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:46.509892  697240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:46.551231  697240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:37:46.559886  697240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:37:46.568529  697240 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:37:46.576298  697240 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:37:46.584235  697240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:37:46.588371  697240 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:37:46.588429  697240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:37:46.625148  697240 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:37:46.633370  697240 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
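Each certificate copied under /usr/share/ca-certificates is made visible to OpenSSL's hash-based lookup by symlinking it from /etc/ssl/certs under its subject hash (51391683.0, b5213941.0 and 3ec20f2e.0 above). The same dance for the cluster CA, mirroring the logged commands:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"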
	I1207 23:37:46.642224  697240 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:37:46.646106  697240 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:37:46.646168  697240 kubeadm.go:401] StartCluster: {Name:custom-flannel-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:custom-flannel-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNS
Log:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:46.646259  697240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:37:46.646300  697240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:37:46.674079  697240 cri.go:89] found id: ""
	I1207 23:37:46.674144  697240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:37:46.682721  697240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:37:46.691528  697240 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:37:46.691593  697240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:37:46.699787  697240 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:37:46.699811  697240 kubeadm.go:158] found existing configuration files:
	
	I1207 23:37:46.699862  697240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:37:46.708052  697240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:37:46.708115  697240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:37:46.715867  697240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:37:46.723726  697240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:37:46.723796  697240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:37:46.731434  697240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:37:46.739410  697240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:37:46.739464  697240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:37:46.747221  697240 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:37:46.755432  697240 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:37:46.755490  697240 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 23:37:46.763543  697240 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:37:46.806142  697240 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 23:37:46.806210  697240 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:37:46.827979  697240 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:37:46.828044  697240 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:37:46.828121  697240 kubeadm.go:319] OS: Linux
	I1207 23:37:46.828203  697240 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:37:46.828264  697240 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:37:46.828390  697240 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:37:46.828463  697240 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:37:46.828532  697240 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:37:46.828596  697240 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:37:46.828675  697240 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:37:46.828743  697240 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:37:46.892537  697240 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:37:46.892692  697240 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:37:46.892875  697240 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:37:46.900999  697240 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1207 23:37:43.457256  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:45.957159  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	I1207 23:37:46.307553  697202 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-600852
	
	I1207 23:37:46.307583  697202 ubuntu.go:182] provisioning hostname "calico-600852"
	I1207 23:37:46.307668  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:46.327449  697202 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:46.327772  697202 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1207 23:37:46.327804  697202 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-600852 && echo "calico-600852" | sudo tee /etc/hostname
	I1207 23:37:46.468528  697202 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-600852
	
	I1207 23:37:46.468628  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:46.488224  697202 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:46.488531  697202 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1207 23:37:46.488550  697202 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-600852' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-600852/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-600852' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 23:37:46.620573  697202 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1207 23:37:46.620605  697202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-389542/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-389542/.minikube}
	I1207 23:37:46.620634  697202 ubuntu.go:190] setting up certificates
	I1207 23:37:46.620647  697202 provision.go:84] configureAuth start
	I1207 23:37:46.620717  697202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-600852
	I1207 23:37:46.641379  697202 provision.go:143] copyHostCerts
	I1207 23:37:46.641462  697202 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem, removing ...
	I1207 23:37:46.641480  697202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem
	I1207 23:37:46.641550  697202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/ca.pem (1082 bytes)
	I1207 23:37:46.641663  697202 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem, removing ...
	I1207 23:37:46.641674  697202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem
	I1207 23:37:46.641713  697202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/cert.pem (1123 bytes)
	I1207 23:37:46.641806  697202 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem, removing ...
	I1207 23:37:46.641817  697202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem
	I1207 23:37:46.641853  697202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-389542/.minikube/key.pem (1675 bytes)
	I1207 23:37:46.641928  697202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem org=jenkins.calico-600852 san=[127.0.0.1 192.168.85.2 calico-600852 localhost minikube]
	I1207 23:37:46.747911  697202 provision.go:177] copyRemoteCerts
	I1207 23:37:46.747964  697202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 23:37:46.748001  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:46.769238  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:37:46.867766  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1207 23:37:46.889480  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1207 23:37:46.909394  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 23:37:46.927052  697202 provision.go:87] duration metric: took 306.387989ms to configureAuth
	I1207 23:37:46.927083  697202 ubuntu.go:206] setting minikube options for container-runtime
	I1207 23:37:46.927305  697202 config.go:182] Loaded profile config "calico-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:37:46.927435  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:46.946235  697202 main.go:143] libmachine: Using SSH client type: native
	I1207 23:37:46.946559  697202 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1207 23:37:46.946589  697202 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1207 23:37:47.229624  697202 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1207 23:37:47.229674  697202 machine.go:97] duration metric: took 4.079494081s to provisionDockerMachine
	I1207 23:37:47.229686  697202 client.go:176] duration metric: took 11.620532329s to LocalClient.Create
	I1207 23:37:47.229702  697202 start.go:167] duration metric: took 11.62060605s to libmachine.API.Create "calico-600852"
	I1207 23:37:47.229712  697202 start.go:293] postStartSetup for "calico-600852" (driver="docker")
	I1207 23:37:47.229721  697202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 23:37:47.229778  697202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 23:37:47.229815  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:47.249083  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:37:47.344551  697202 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 23:37:47.348117  697202 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 23:37:47.348146  697202 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1207 23:37:47.348158  697202 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/addons for local assets ...
	I1207 23:37:47.348222  697202 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-389542/.minikube/files for local assets ...
	I1207 23:37:47.348460  697202 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem -> 3931252.pem in /etc/ssl/certs
	I1207 23:37:47.348622  697202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 23:37:47.356612  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:37:47.377303  697202 start.go:296] duration metric: took 147.575453ms for postStartSetup
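
The "local asset" lines above reflect minikube's file-sync convention: files placed under the .minikube/files/<path> directory on the host are copied to /<path> inside the node during postStartSetup, which is how 3931252.pem ended up in /etc/ssl/certs. A hedged example of staging another file the same way (the file name is illustrative; this run uses the jenkins MINIKUBE_HOME shown in the log rather than the default ~/.minikube):

    # host side: stage a file so the next start copies it into the node at /etc/ssl/certs
    mkdir -p ~/.minikube/files/etc/ssl/certs
    cp extra-ca.pem ~/.minikube/files/etc/ssl/certs/extra-ca.pem
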
	I1207 23:37:47.377718  697202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-600852
	I1207 23:37:47.396703  697202 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/config.json ...
	I1207 23:37:47.396965  697202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:37:47.397004  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:47.416235  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:37:47.508442  697202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 23:37:47.513073  697202 start.go:128] duration metric: took 11.90632642s to createHost
	I1207 23:37:47.513101  697202 start.go:83] releasing machines lock for "calico-600852", held for 11.906484487s
	I1207 23:37:47.513171  697202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-600852
	I1207 23:37:47.534994  697202 ssh_runner.go:195] Run: cat /version.json
	I1207 23:37:47.535047  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:47.535080  697202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 23:37:47.535162  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:37:47.556062  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:37:47.558419  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:37:47.707730  697202 ssh_runner.go:195] Run: systemctl --version
	I1207 23:37:47.715115  697202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1207 23:37:47.752265  697202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 23:37:47.757322  697202 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 23:37:47.757405  697202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 23:37:47.784815  697202 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
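
The find/mv step above sidelines pre-existing bridge and podman CNI definitions by renaming them with a .mk_disabled suffix, so the runtime only loads the CNI minikube installs for this profile (calico). The one-liner from the log, expanded for readability (same effect):

    # sideline competing bridge/podman CNI configs so only the chosen CNI is used
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
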
	I1207 23:37:47.784840  697202 start.go:496] detecting cgroup driver to use...
	I1207 23:37:47.784871  697202 detect.go:190] detected "systemd" cgroup driver on host os
	I1207 23:37:47.784919  697202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 23:37:47.801532  697202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 23:37:47.814605  697202 docker.go:218] disabling cri-docker service (if available) ...
	I1207 23:37:47.814675  697202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1207 23:37:47.832123  697202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1207 23:37:47.851225  697202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1207 23:37:47.938152  697202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1207 23:37:48.033487  697202 docker.go:234] disabling docker service ...
	I1207 23:37:48.033552  697202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1207 23:37:48.053822  697202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1207 23:37:48.067528  697202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1207 23:37:48.158030  697202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1207 23:37:48.250048  697202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1207 23:37:48.265676  697202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 23:37:48.281951  697202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1207 23:37:48.282045  697202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.294128  697202 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1207 23:37:48.294197  697202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.304720  697202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.314523  697202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.324484  697202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 23:37:48.333594  697202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.342651  697202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.356711  697202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1207 23:37:48.366809  697202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 23:37:48.375853  697202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 23:37:48.383607  697202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:48.480441  697202 ssh_runner.go:195] Run: sudo systemctl restart crio
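
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the restart; this is reconstructed from the commands in the log, and the TOML section headers are the standard CRI-O ones (assumed, since the sed edits only touch individual keys):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
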
	I1207 23:37:48.621673  697202 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1207 23:37:48.621751  697202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1207 23:37:48.626007  697202 start.go:564] Will wait 60s for crictl version
	I1207 23:37:48.626080  697202 ssh_runner.go:195] Run: which crictl
	I1207 23:37:48.629792  697202 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1207 23:37:48.656613  697202 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
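
The version output above works because of the crictl.yaml written a few lines earlier, which points crictl at the CRI-O socket. With that in place the runtime can be inspected directly on the node, for example:

    # /etc/crictl.yaml: runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version   # runtime name/version, as printed above
    sudo crictl info      # runtime status, including CNI readiness
    sudo crictl images    # images present on the node (queried below with --output json)
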
	I1207 23:37:48.656696  697202 ssh_runner.go:195] Run: crio --version
	I1207 23:37:48.688631  697202 ssh_runner.go:195] Run: crio --version
	I1207 23:37:48.721122  697202 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1207 23:37:46.032763  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	W1207 23:37:48.033350  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	I1207 23:37:46.903106  697240 out.go:252]   - Generating certificates and keys ...
	I1207 23:37:46.903221  697240 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:37:46.903346  697240 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:37:47.057702  697240 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:37:47.145711  697240 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 23:37:47.283656  697240 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 23:37:47.443811  697240 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 23:37:47.627502  697240 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 23:37:47.627687  697240 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-600852 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1207 23:37:47.741433  697240 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 23:37:47.741596  697240 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-600852 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1207 23:37:48.250407  697240 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 23:37:48.484094  697240 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 23:37:48.700165  697240 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 23:37:48.700264  697240 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:37:48.796665  697240 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:37:48.985875  697240 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:37:49.253932  697240 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:37:49.612996  697240 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:37:49.806151  697240 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:37:49.806958  697240 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:37:49.812529  697240 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 23:37:48.722395  697202 cli_runner.go:164] Run: docker network inspect calico-600852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 23:37:48.741546  697202 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1207 23:37:48.746060  697202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 23:37:48.757152  697202 kubeadm.go:884] updating cluster {Name:calico-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1207 23:37:48.757291  697202 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 23:37:48.757402  697202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:37:48.794158  697202 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:37:48.794184  697202 crio.go:433] Images already preloaded, skipping extraction
	I1207 23:37:48.794238  697202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1207 23:37:48.822171  697202 crio.go:514] all images are preloaded for cri-o runtime.
	I1207 23:37:48.822198  697202 cache_images.go:86] Images are preloaded, skipping loading
	I1207 23:37:48.822209  697202 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1207 23:37:48.822350  697202 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-600852 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:calico-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1207 23:37:48.822434  697202 ssh_runner.go:195] Run: crio config
	I1207 23:37:48.883878  697202 cni.go:84] Creating CNI manager for "calico"
	I1207 23:37:48.883916  697202 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1207 23:37:48.883939  697202 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-600852 NodeName:calico-600852 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 23:37:48.884071  697202 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-600852"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 23:37:48.884151  697202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1207 23:37:48.893454  697202 binaries.go:51] Found k8s binaries, skipping transfer
	I1207 23:37:48.893533  697202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 23:37:48.902242  697202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1207 23:37:48.915810  697202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 23:37:48.933015  697202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
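
The generated kubeadm config is scp'd to /var/tmp/minikube/kubeadm.yaml.new and later promoted to kubeadm.yaml for the init run. If needed, it can be exercised without touching cluster state; a sketch using standard kubeadm flags (kubeadm config validate assumes a reasonably recent kubeadm, which v1.34.2 is):

    # render what kubeadm would do with this config, without creating anything
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
    # or just validate the file against the kubeadm API types
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
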
	I1207 23:37:48.946732  697202 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1207 23:37:48.950652  697202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
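
The one-liner above rewrites /etc/hosts in two steps: filter out any previous control-plane.minikube.internal entry, append the current mapping, then copy the result back as root (the temp file is needed because the redirection runs as the SSH user). Expanded for readability, with the node IP from this log:

    tmp=$(mktemp)
    # drop any existing mapping for the control-plane alias, then append the current one
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > "$tmp"
    printf '192.168.85.2\tcontrol-plane.minikube.internal\n' >> "$tmp"
    sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
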
	I1207 23:37:48.961546  697202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:37:49.058120  697202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:37:49.089310  697202 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852 for IP: 192.168.85.2
	I1207 23:37:49.089345  697202 certs.go:195] generating shared ca certs ...
	I1207 23:37:49.089375  697202 certs.go:227] acquiring lock for ca certs: {Name:mk14fdbff0e404e9cf682ad11354efb5c5f5a778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.089643  697202 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key
	I1207 23:37:49.089703  697202 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key
	I1207 23:37:49.089716  697202 certs.go:257] generating profile certs ...
	I1207 23:37:49.089787  697202 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/client.key
	I1207 23:37:49.089803  697202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/client.crt with IP's: []
	I1207 23:37:49.315583  697202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/client.crt ...
	I1207 23:37:49.315613  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/client.crt: {Name:mk9246e4e51936452e13c158ca3debae4b8fa078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.315809  697202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/client.key ...
	I1207 23:37:49.315829  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/client.key: {Name:mk41c8ae6d14eb827fe4a8440f28a3f158fd7879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.315965  697202 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.key.bc22f359
	I1207 23:37:49.315983  697202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt.bc22f359 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1207 23:37:49.439672  697202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt.bc22f359 ...
	I1207 23:37:49.439705  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt.bc22f359: {Name:mkc553384c69fb61ba71740d3335de3cab4fd14c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.439893  697202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.key.bc22f359 ...
	I1207 23:37:49.439907  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.key.bc22f359: {Name:mkc2e1d0bfa6b237c0b447d1a8825119b2d2ef05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.439979  697202 certs.go:382] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt.bc22f359 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt
	I1207 23:37:49.440076  697202 certs.go:386] copying /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.key.bc22f359 -> /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.key
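
The apiserver certificate generated here is signed for the SANs listed above (the service VIP 10.96.0.1, loopback, 10.0.0.1 and the node IP 192.168.85.2). A quick way to confirm that on the host, using the profile path from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
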
	I1207 23:37:49.440155  697202 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.key
	I1207 23:37:49.440172  697202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.crt with IP's: []
	I1207 23:37:49.514150  697202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.crt ...
	I1207 23:37:49.514178  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.crt: {Name:mk97b1a455a2b9bfb030964cd6977408a79040a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.514368  697202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.key ...
	I1207 23:37:49.514384  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.key: {Name:mkf9f4cc3cb828ff6ef08a4aca0bf7b4c1aa7539 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:37:49.514604  697202 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem (1338 bytes)
	W1207 23:37:49.514646  697202 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125_empty.pem, impossibly tiny 0 bytes
	I1207 23:37:49.514657  697202 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 23:37:49.514680  697202 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/ca.pem (1082 bytes)
	I1207 23:37:49.514704  697202 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/cert.pem (1123 bytes)
	I1207 23:37:49.514733  697202 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/certs/key.pem (1675 bytes)
	I1207 23:37:49.514772  697202 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem (1708 bytes)
	I1207 23:37:49.515456  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 23:37:49.535595  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 23:37:49.554113  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 23:37:49.572786  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 23:37:49.590631  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1207 23:37:49.608351  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 23:37:49.626842  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 23:37:49.646114  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/calico-600852/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 23:37:49.666129  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/certs/393125.pem --> /usr/share/ca-certificates/393125.pem (1338 bytes)
	I1207 23:37:49.686982  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/ssl/certs/3931252.pem --> /usr/share/ca-certificates/3931252.pem (1708 bytes)
	I1207 23:37:49.705972  697202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 23:37:49.724847  697202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 23:37:49.738191  697202 ssh_runner.go:195] Run: openssl version
	I1207 23:37:49.744883  697202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:49.752662  697202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1207 23:37:49.761172  697202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:49.766501  697202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  7 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:49.766563  697202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 23:37:49.807755  697202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1207 23:37:49.817966  697202 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1207 23:37:49.826049  697202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/393125.pem
	I1207 23:37:49.834554  697202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/393125.pem /etc/ssl/certs/393125.pem
	I1207 23:37:49.843290  697202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/393125.pem
	I1207 23:37:49.847578  697202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  7 23:04 /usr/share/ca-certificates/393125.pem
	I1207 23:37:49.847639  697202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/393125.pem
	I1207 23:37:49.890621  697202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1207 23:37:49.898845  697202 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/393125.pem /etc/ssl/certs/51391683.0
	I1207 23:37:49.907068  697202 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3931252.pem
	I1207 23:37:49.915401  697202 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3931252.pem /etc/ssl/certs/3931252.pem
	I1207 23:37:49.923873  697202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3931252.pem
	I1207 23:37:49.928171  697202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  7 23:04 /usr/share/ca-certificates/3931252.pem
	I1207 23:37:49.928235  697202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3931252.pem
	I1207 23:37:49.966142  697202 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1207 23:37:49.974965  697202 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3931252.pem /etc/ssl/certs/3ec20f2e.0
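
The alternating openssl x509 -hash and ln -fs runs above build the standard OpenSSL CA-directory layout: each trusted certificate gets a symlink named <subject-hash>.0 (b5213941, 51391683 and 3ec20f2e in this log) so verification can find it by hash. The same pattern for one certificate:

    cert=/usr/share/ca-certificates/3931252.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. 3ec20f2e for this cert
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
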
	I1207 23:37:49.984229  697202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1207 23:37:49.988486  697202 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1207 23:37:49.988558  697202 kubeadm.go:401] StartCluster: {Name:calico-600852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:calico-600852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:37:49.988634  697202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1207 23:37:49.988698  697202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1207 23:37:50.018495  697202 cri.go:89] found id: ""
	I1207 23:37:50.018561  697202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 23:37:50.027140  697202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 23:37:50.037313  697202 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1207 23:37:50.037409  697202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 23:37:50.045834  697202 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 23:37:50.045857  697202 kubeadm.go:158] found existing configuration files:
	
	I1207 23:37:50.045901  697202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 23:37:50.054279  697202 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1207 23:37:50.054452  697202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1207 23:37:50.063032  697202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 23:37:50.071466  697202 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1207 23:37:50.071529  697202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1207 23:37:50.079305  697202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 23:37:50.087412  697202 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1207 23:37:50.087483  697202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1207 23:37:50.095229  697202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 23:37:50.103530  697202 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1207 23:37:50.103596  697202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1207 23:37:50.111962  697202 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 23:37:50.153270  697202 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1207 23:37:50.153384  697202 kubeadm.go:319] [preflight] Running pre-flight checks
	I1207 23:37:50.174426  697202 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1207 23:37:50.174507  697202 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1207 23:37:50.174577  697202 kubeadm.go:319] OS: Linux
	I1207 23:37:50.174671  697202 kubeadm.go:319] CGROUPS_CPU: enabled
	I1207 23:37:50.174741  697202 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1207 23:37:50.174806  697202 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1207 23:37:50.174852  697202 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1207 23:37:50.174923  697202 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1207 23:37:50.174999  697202 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1207 23:37:50.175089  697202 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1207 23:37:50.175160  697202 kubeadm.go:319] CGROUPS_IO: enabled
	I1207 23:37:50.241958  697202 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 23:37:50.242108  697202 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 23:37:50.242249  697202 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1207 23:37:50.249773  697202 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 23:37:50.256443  697202 out.go:252]   - Generating certificates and keys ...
	I1207 23:37:50.256557  697202 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1207 23:37:50.256645  697202 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1207 23:37:49.814207  697240 out.go:252]   - Booting up control plane ...
	I1207 23:37:49.814372  697240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:37:49.814485  697240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:37:49.816445  697240 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:37:49.831652  697240 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:37:49.831898  697240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:37:49.840067  697240 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:37:49.840501  697240 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:37:49.840568  697240 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:37:49.955871  697240 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 23:37:49.956042  697240 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1207 23:37:48.457062  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:50.957365  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:50.533839  687309 pod_ready.go:104] pod "coredns-66bc5c9577-p4v2f" is not "Ready", error: <nil>
	I1207 23:37:51.533645  687309 pod_ready.go:94] pod "coredns-66bc5c9577-p4v2f" is "Ready"
	I1207 23:37:51.533680  687309 pod_ready.go:86] duration metric: took 35.506908878s for pod "coredns-66bc5c9577-p4v2f" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.536614  687309 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.540857  687309 pod_ready.go:94] pod "etcd-default-k8s-diff-port-312944" is "Ready"
	I1207 23:37:51.540881  687309 pod_ready.go:86] duration metric: took 4.240955ms for pod "etcd-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.542925  687309 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.546931  687309 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-312944" is "Ready"
	I1207 23:37:51.546955  687309 pod_ready.go:86] duration metric: took 4.009116ms for pod "kube-apiserver-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.548947  687309 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.733165  687309 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-312944" is "Ready"
	I1207 23:37:51.733197  687309 pod_ready.go:86] duration metric: took 184.229643ms for pod "kube-controller-manager-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:51.931764  687309 pod_ready.go:83] waiting for pod "kube-proxy-7stg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:52.330433  687309 pod_ready.go:94] pod "kube-proxy-7stg5" is "Ready"
	I1207 23:37:52.330464  687309 pod_ready.go:86] duration metric: took 398.673038ms for pod "kube-proxy-7stg5" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:52.532189  687309 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:52.930982  687309 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-312944" is "Ready"
	I1207 23:37:52.931018  687309 pod_ready.go:86] duration metric: took 398.79821ms for pod "kube-scheduler-default-k8s-diff-port-312944" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:52.931033  687309 pod_ready.go:40] duration metric: took 36.908392392s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:37:52.982802  687309 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 23:37:52.984436  687309 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-312944" cluster and "default" namespace by default
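
The readiness loop that just completed for default-k8s-diff-port-312944 polls the labelled kube-system pods until each is Ready. An equivalent spot check with plain kubectl, using the profile name as the context (the timeout value is illustrative):

    kubectl --context default-k8s-diff-port-312944 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
    kubectl --context default-k8s-diff-port-312944 -n kube-system get pods
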
	I1207 23:37:50.421391  697202 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 23:37:50.554773  697202 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1207 23:37:50.658025  697202 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1207 23:37:50.866863  697202 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1207 23:37:51.050985  697202 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1207 23:37:51.051159  697202 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-600852 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1207 23:37:51.209204  697202 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1207 23:37:51.209612  697202 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-600852 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1207 23:37:51.560636  697202 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 23:37:51.846163  697202 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 23:37:52.203239  697202 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1207 23:37:52.203576  697202 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 23:37:52.580235  697202 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 23:37:52.898080  697202 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1207 23:37:53.671477  697202 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 23:37:53.753966  697202 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 23:37:53.848471  697202 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 23:37:53.849350  697202 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 23:37:53.853007  697202 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 23:37:50.957502  697240 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001823568s
	I1207 23:37:50.962005  697240 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 23:37:50.962147  697240 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1207 23:37:50.962242  697240 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 23:37:50.962344  697240 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 23:37:52.308872  697240 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.346774352s
	I1207 23:37:52.794936  697240 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.83266571s
	I1207 23:37:54.464007  697240 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501991911s
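
The control-plane-check lines above probe the same health endpoints kubeadm reports; from inside the custom-flannel-600852 node they can be hit directly (192.168.103.2 is this node's IP per the log; -k skips TLS verification for a quick check):

    curl -sk https://192.168.103.2:8443/livez      # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz       # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez         # kube-scheduler
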
	I1207 23:37:54.481832  697240 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 23:37:54.506812  697240 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 23:37:54.518357  697240 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 23:37:54.518677  697240 kubeadm.go:319] [mark-control-plane] Marking the node custom-flannel-600852 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 23:37:54.527249  697240 kubeadm.go:319] [bootstrap-token] Using token: 3n01no.dt369lpba9g6frnf
	I1207 23:37:53.857851  697202 out.go:252]   - Booting up control plane ...
	I1207 23:37:53.858009  697202 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 23:37:53.858119  697202 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 23:37:53.858210  697202 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 23:37:53.873295  697202 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 23:37:53.873437  697202 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1207 23:37:53.883292  697202 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1207 23:37:53.883755  697202 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 23:37:53.883864  697202 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1207 23:37:54.007036  697202 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1207 23:37:54.007234  697202 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1207 23:37:55.008696  697202 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001763784s
	I1207 23:37:55.012005  697202 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1207 23:37:55.012134  697202 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1207 23:37:55.012284  697202 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1207 23:37:55.012435  697202 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1207 23:37:54.528778  697240 out.go:252]   - Configuring RBAC rules ...
	I1207 23:37:54.528924  697240 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 23:37:54.532897  697240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 23:37:54.539239  697240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 23:37:54.542220  697240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 23:37:54.544999  697240 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 23:37:54.547719  697240 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 23:37:54.871561  697240 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 23:37:55.287125  697240 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 23:37:55.871441  697240 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 23:37:55.872880  697240 kubeadm.go:319] 
	I1207 23:37:55.873132  697240 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 23:37:55.873149  697240 kubeadm.go:319] 
	I1207 23:37:55.873312  697240 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 23:37:55.873321  697240 kubeadm.go:319] 
	I1207 23:37:55.873384  697240 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 23:37:55.873477  697240 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 23:37:55.873548  697240 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 23:37:55.873561  697240 kubeadm.go:319] 
	I1207 23:37:55.873622  697240 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 23:37:55.873629  697240 kubeadm.go:319] 
	I1207 23:37:55.873686  697240 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 23:37:55.873692  697240 kubeadm.go:319] 
	I1207 23:37:55.873749  697240 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 23:37:55.874010  697240 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 23:37:55.874128  697240 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 23:37:55.874138  697240 kubeadm.go:319] 
	I1207 23:37:55.874270  697240 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 23:37:55.874416  697240 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 23:37:55.874426  697240 kubeadm.go:319] 
	I1207 23:37:55.874544  697240 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3n01no.dt369lpba9g6frnf \
	I1207 23:37:55.874707  697240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 \
	I1207 23:37:55.874738  697240 kubeadm.go:319] 	--control-plane 
	I1207 23:37:55.874743  697240 kubeadm.go:319] 
	I1207 23:37:55.874916  697240 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 23:37:55.874931  697240 kubeadm.go:319] 
	I1207 23:37:55.875051  697240 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3n01no.dt369lpba9g6frnf \
	I1207 23:37:55.875200  697240 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 
	I1207 23:37:55.878393  697240 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 23:37:55.878573  697240 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 23:37:55.878613  697240 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1207 23:37:55.880468  697240 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	W1207 23:37:52.957500  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	W1207 23:37:55.457377  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	I1207 23:37:56.706613  697202 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.694503617s
	I1207 23:37:57.076842  697202 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.064835924s
	I1207 23:37:59.013801  697202 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001721609s
	I1207 23:37:59.030204  697202 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 23:37:59.041137  697202 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 23:37:59.051015  697202 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 23:37:59.051275  697202 kubeadm.go:319] [mark-control-plane] Marking the node calico-600852 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 23:37:59.059318  697202 kubeadm.go:319] [bootstrap-token] Using token: k6if0t.dzl4572wdn2qqw88
	I1207 23:37:59.061529  697202 out.go:252]   - Configuring RBAC rules ...
	I1207 23:37:59.061681  697202 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 23:37:59.066129  697202 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 23:37:59.073546  697202 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 23:37:59.076751  697202 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 23:37:59.079275  697202 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 23:37:59.083840  697202 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 23:37:59.420473  697202 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 23:37:59.834460  697202 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1207 23:38:00.419356  697202 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1207 23:38:00.420472  697202 kubeadm.go:319] 
	I1207 23:38:00.420573  697202 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1207 23:38:00.420585  697202 kubeadm.go:319] 
	I1207 23:38:00.420691  697202 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1207 23:38:00.420702  697202 kubeadm.go:319] 
	I1207 23:38:00.420737  697202 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1207 23:38:00.420823  697202 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 23:38:00.420899  697202 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 23:38:00.420908  697202 kubeadm.go:319] 
	I1207 23:38:00.420986  697202 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1207 23:38:00.420996  697202 kubeadm.go:319] 
	I1207 23:38:00.421077  697202 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 23:38:00.421087  697202 kubeadm.go:319] 
	I1207 23:38:00.421161  697202 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1207 23:38:00.421269  697202 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 23:38:00.421397  697202 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 23:38:00.421408  697202 kubeadm.go:319] 
	I1207 23:38:00.421539  697202 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 23:38:00.421651  697202 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1207 23:38:00.421660  697202 kubeadm.go:319] 
	I1207 23:38:00.421770  697202 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token k6if0t.dzl4572wdn2qqw88 \
	I1207 23:38:00.421923  697202 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 \
	I1207 23:38:00.421956  697202 kubeadm.go:319] 	--control-plane 
	I1207 23:38:00.421965  697202 kubeadm.go:319] 
	I1207 23:38:00.422086  697202 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1207 23:38:00.422095  697202 kubeadm.go:319] 
	I1207 23:38:00.422195  697202 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token k6if0t.dzl4572wdn2qqw88 \
	I1207 23:38:00.422342  697202 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a6f9ffe32c21ad638ebba2743e15f014ccba55b6baef971adb92cbf8edf27a49 
	I1207 23:38:00.425705  697202 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1207 23:38:00.425899  697202 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 23:38:00.425918  697202 cni.go:84] Creating CNI manager for "calico"
	I1207 23:38:00.427838  697202 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1207 23:37:55.881576  697240 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1207 23:37:55.881637  697240 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1207 23:37:55.885946  697240 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1207 23:37:55.885972  697240 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1207 23:37:55.906284  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 23:37:56.308367  697240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 23:37:56.308416  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:56.308544  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-600852 minikube.k8s.io/updated_at=2025_12_07T23_37_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=custom-flannel-600852 minikube.k8s.io/primary=true
	I1207 23:37:56.407068  697240 ops.go:34] apiserver oom_adj: -16
	I1207 23:37:56.407202  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:56.907400  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:57.407404  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:57.907560  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:58.408072  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:58.907489  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:59.407858  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:37:59.907568  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:00.407401  697240 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:00.483488  697240 kubeadm.go:1114] duration metric: took 4.175116114s to wait for elevateKubeSystemPrivileges
	I1207 23:38:00.483533  697240 kubeadm.go:403] duration metric: took 13.83737016s to StartCluster
	I1207 23:38:00.483556  697240 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:38:00.483633  697240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:38:00.485027  697240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:38:00.485293  697240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 23:38:00.485297  697240 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:38:00.485406  697240 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:38:00.485500  697240 addons.go:70] Setting storage-provisioner=true in profile "custom-flannel-600852"
	I1207 23:38:00.485519  697240 addons.go:70] Setting default-storageclass=true in profile "custom-flannel-600852"
	I1207 23:38:00.485531  697240 addons.go:239] Setting addon storage-provisioner=true in "custom-flannel-600852"
	I1207 23:38:00.485546  697240 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-600852"
	I1207 23:38:00.485568  697240 host.go:66] Checking if "custom-flannel-600852" exists ...
	I1207 23:38:00.485526  697240 config.go:182] Loaded profile config "custom-flannel-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:38:00.485954  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Status}}
	I1207 23:38:00.486106  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Status}}
	I1207 23:38:00.486903  697240 out.go:179] * Verifying Kubernetes components...
	I1207 23:38:00.488169  697240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:38:00.512789  697240 addons.go:239] Setting addon default-storageclass=true in "custom-flannel-600852"
	I1207 23:38:00.512842  697240 host.go:66] Checking if "custom-flannel-600852" exists ...
	I1207 23:38:00.513350  697240 cli_runner.go:164] Run: docker container inspect custom-flannel-600852 --format={{.State.Status}}
	I1207 23:38:00.515122  697240 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1207 23:37:57.957064  684670 node_ready.go:57] node "kindnet-600852" has "Ready":"False" status (will retry)
	I1207 23:37:58.957197  684670 node_ready.go:49] node "kindnet-600852" is "Ready"
	I1207 23:37:58.957236  684670 node_ready.go:38] duration metric: took 41.503819012s for node "kindnet-600852" to be "Ready" ...
	I1207 23:37:58.957256  684670 api_server.go:52] waiting for apiserver process to appear ...
	I1207 23:37:58.957318  684670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:37:58.971503  684670 api_server.go:72] duration metric: took 41.802323361s to wait for apiserver process to appear ...
	I1207 23:37:58.971533  684670 api_server.go:88] waiting for apiserver healthz status ...
	I1207 23:37:58.971552  684670 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1207 23:37:58.977257  684670 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1207 23:37:58.978268  684670 api_server.go:141] control plane version: v1.34.2
	I1207 23:37:58.978297  684670 api_server.go:131] duration metric: took 6.756228ms to wait for apiserver health ...
	I1207 23:37:58.978308  684670 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 23:37:58.982434  684670 system_pods.go:59] 8 kube-system pods found
	I1207 23:37:58.982469  684670 system_pods.go:61] "coredns-66bc5c9577-8rwsj" [d85f99d6-a1ba-4cfc-bcdc-aac22ea4af3e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:58.982476  684670 system_pods.go:61] "etcd-kindnet-600852" [adf5b308-5358-4d7b-9df5-bafffa61f8b6] Running
	I1207 23:37:58.982482  684670 system_pods.go:61] "kindnet-vzkfg" [87c7cd14-d729-423a-a43f-bdb77eaeba04] Running
	I1207 23:37:58.982485  684670 system_pods.go:61] "kube-apiserver-kindnet-600852" [3c3cfd49-d544-4dfb-bf4f-7894225a944c] Running
	I1207 23:37:58.982488  684670 system_pods.go:61] "kube-controller-manager-kindnet-600852" [502a4d63-dedc-4a8b-a1ea-be9a16e72fb6] Running
	I1207 23:37:58.982493  684670 system_pods.go:61] "kube-proxy-nmxm2" [21011e1c-6722-4e63-9731-1af680bb14f2] Running
	I1207 23:37:58.982496  684670 system_pods.go:61] "kube-scheduler-kindnet-600852" [3193f27f-1ba4-4432-b4d5-7f6af3c32df6] Running
	I1207 23:37:58.982501  684670 system_pods.go:61] "storage-provisioner" [e9d9092f-ca1a-4cf3-bbbd-b284d49b2f12] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:58.982508  684670 system_pods.go:74] duration metric: took 4.193925ms to wait for pod list to return data ...
	I1207 23:37:58.982519  684670 default_sa.go:34] waiting for default service account to be created ...
	I1207 23:37:58.985103  684670 default_sa.go:45] found service account: "default"
	I1207 23:37:58.985121  684670 default_sa.go:55] duration metric: took 2.596819ms for default service account to be created ...
	I1207 23:37:58.985130  684670 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 23:37:58.987871  684670 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:58.987899  684670 system_pods.go:89] "coredns-66bc5c9577-8rwsj" [d85f99d6-a1ba-4cfc-bcdc-aac22ea4af3e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 23:37:58.987905  684670 system_pods.go:89] "etcd-kindnet-600852" [adf5b308-5358-4d7b-9df5-bafffa61f8b6] Running
	I1207 23:37:58.987912  684670 system_pods.go:89] "kindnet-vzkfg" [87c7cd14-d729-423a-a43f-bdb77eaeba04] Running
	I1207 23:37:58.987918  684670 system_pods.go:89] "kube-apiserver-kindnet-600852" [3c3cfd49-d544-4dfb-bf4f-7894225a944c] Running
	I1207 23:37:58.987923  684670 system_pods.go:89] "kube-controller-manager-kindnet-600852" [502a4d63-dedc-4a8b-a1ea-be9a16e72fb6] Running
	I1207 23:37:58.987928  684670 system_pods.go:89] "kube-proxy-nmxm2" [21011e1c-6722-4e63-9731-1af680bb14f2] Running
	I1207 23:37:58.987936  684670 system_pods.go:89] "kube-scheduler-kindnet-600852" [3193f27f-1ba4-4432-b4d5-7f6af3c32df6] Running
	I1207 23:37:58.987943  684670 system_pods.go:89] "storage-provisioner" [e9d9092f-ca1a-4cf3-bbbd-b284d49b2f12] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 23:37:58.987972  684670 retry.go:31] will retry after 237.710109ms: missing components: kube-dns
	I1207 23:37:59.231977  684670 system_pods.go:86] 8 kube-system pods found
	I1207 23:37:59.232015  684670 system_pods.go:89] "coredns-66bc5c9577-8rwsj" [d85f99d6-a1ba-4cfc-bcdc-aac22ea4af3e] Running
	I1207 23:37:59.232024  684670 system_pods.go:89] "etcd-kindnet-600852" [adf5b308-5358-4d7b-9df5-bafffa61f8b6] Running
	I1207 23:37:59.232029  684670 system_pods.go:89] "kindnet-vzkfg" [87c7cd14-d729-423a-a43f-bdb77eaeba04] Running
	I1207 23:37:59.232038  684670 system_pods.go:89] "kube-apiserver-kindnet-600852" [3c3cfd49-d544-4dfb-bf4f-7894225a944c] Running
	I1207 23:37:59.232044  684670 system_pods.go:89] "kube-controller-manager-kindnet-600852" [502a4d63-dedc-4a8b-a1ea-be9a16e72fb6] Running
	I1207 23:37:59.232050  684670 system_pods.go:89] "kube-proxy-nmxm2" [21011e1c-6722-4e63-9731-1af680bb14f2] Running
	I1207 23:37:59.232053  684670 system_pods.go:89] "kube-scheduler-kindnet-600852" [3193f27f-1ba4-4432-b4d5-7f6af3c32df6] Running
	I1207 23:37:59.232056  684670 system_pods.go:89] "storage-provisioner" [e9d9092f-ca1a-4cf3-bbbd-b284d49b2f12] Running
	I1207 23:37:59.232067  684670 system_pods.go:126] duration metric: took 246.92981ms to wait for k8s-apps to be running ...
	I1207 23:37:59.232078  684670 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 23:37:59.232138  684670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:37:59.245442  684670 system_svc.go:56] duration metric: took 13.353564ms WaitForService to wait for kubelet
	I1207 23:37:59.245484  684670 kubeadm.go:587] duration metric: took 42.076307711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 23:37:59.245510  684670 node_conditions.go:102] verifying NodePressure condition ...
	I1207 23:37:59.248701  684670 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1207 23:37:59.248728  684670 node_conditions.go:123] node cpu capacity is 8
	I1207 23:37:59.248746  684670 node_conditions.go:105] duration metric: took 3.230624ms to run NodePressure ...
	I1207 23:37:59.248759  684670 start.go:242] waiting for startup goroutines ...
	I1207 23:37:59.248765  684670 start.go:247] waiting for cluster config update ...
	I1207 23:37:59.248776  684670 start.go:256] writing updated cluster config ...
	I1207 23:37:59.249023  684670 ssh_runner.go:195] Run: rm -f paused
	I1207 23:37:59.253030  684670 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:37:59.256120  684670 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8rwsj" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.260754  684670 pod_ready.go:94] pod "coredns-66bc5c9577-8rwsj" is "Ready"
	I1207 23:37:59.260779  684670 pod_ready.go:86] duration metric: took 4.635052ms for pod "coredns-66bc5c9577-8rwsj" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.262839  684670 pod_ready.go:83] waiting for pod "etcd-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.266580  684670 pod_ready.go:94] pod "etcd-kindnet-600852" is "Ready"
	I1207 23:37:59.266603  684670 pod_ready.go:86] duration metric: took 3.743047ms for pod "etcd-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.268578  684670 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.272152  684670 pod_ready.go:94] pod "kube-apiserver-kindnet-600852" is "Ready"
	I1207 23:37:59.272172  684670 pod_ready.go:86] duration metric: took 3.574542ms for pod "kube-apiserver-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.273892  684670 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.658282  684670 pod_ready.go:94] pod "kube-controller-manager-kindnet-600852" is "Ready"
	I1207 23:37:59.658312  684670 pod_ready.go:86] duration metric: took 384.397806ms for pod "kube-controller-manager-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:37:59.857388  684670 pod_ready.go:83] waiting for pod "kube-proxy-nmxm2" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:38:00.256701  684670 pod_ready.go:94] pod "kube-proxy-nmxm2" is "Ready"
	I1207 23:38:00.256732  684670 pod_ready.go:86] duration metric: took 399.315784ms for pod "kube-proxy-nmxm2" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:38:00.457202  684670 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:38:00.857454  684670 pod_ready.go:94] pod "kube-scheduler-kindnet-600852" is "Ready"
	I1207 23:38:00.857489  684670 pod_ready.go:86] duration metric: took 400.253299ms for pod "kube-scheduler-kindnet-600852" in "kube-system" namespace to be "Ready" or be gone ...
	I1207 23:38:00.857505  684670 pod_ready.go:40] duration metric: took 1.604447924s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1207 23:38:00.927724  684670 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1207 23:38:00.929471  684670 out.go:179] * Done! kubectl is now configured to use "kindnet-600852" cluster and "default" namespace by default
	I1207 23:38:00.519526  697240 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:38:00.519553  697240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:38:00.519624  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:38:00.545899  697240 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:38:00.546113  697240 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:38:00.546194  697240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-600852
	I1207 23:38:00.552833  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:38:00.575061  697240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/custom-flannel-600852/id_rsa Username:docker}
	I1207 23:38:00.602243  697240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 23:38:00.660619  697240 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:38:00.680961  697240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:38:00.701930  697240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:38:00.864755  697240 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1207 23:38:00.867777  697240 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-600852" to be "Ready" ...
	I1207 23:38:01.119300  697240 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1207 23:38:00.429434  697202 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1207 23:38:00.429461  697202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1207 23:38:00.445271  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1207 23:38:01.472849  697202 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.027539027s)
	I1207 23:38:01.472924  697202 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 23:38:01.473006  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:01.473006  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-600852 minikube.k8s.io/updated_at=2025_12_07T23_38_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=calico-600852 minikube.k8s.io/primary=true
	I1207 23:38:01.482625  697202 ops.go:34] apiserver oom_adj: -16
	I1207 23:38:01.545076  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:02.045428  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:02.545932  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:03.045228  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:03.545668  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:04.045744  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:04.546072  697202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 23:38:04.627386  697202 kubeadm.go:1114] duration metric: took 3.154450039s to wait for elevateKubeSystemPrivileges
	I1207 23:38:04.627434  697202 kubeadm.go:403] duration metric: took 14.638878278s to StartCluster
	I1207 23:38:04.627468  697202 settings.go:142] acquiring lock: {Name:mk372e79badb9c8f25216fa891cff6dfa96ea2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:38:04.627559  697202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:38:04.629712  697202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-389542/kubeconfig: {Name:mkef1ae59f6ce8b6b897800cfb5b8c0e579f2040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 23:38:04.630019  697202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 23:38:04.630034  697202 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1207 23:38:04.630141  697202 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1207 23:38:04.630241  697202 addons.go:70] Setting storage-provisioner=true in profile "calico-600852"
	I1207 23:38:04.630262  697202 addons.go:239] Setting addon storage-provisioner=true in "calico-600852"
	I1207 23:38:04.630262  697202 addons.go:70] Setting default-storageclass=true in profile "calico-600852"
	I1207 23:38:04.630283  697202 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-600852"
	I1207 23:38:04.630296  697202 host.go:66] Checking if "calico-600852" exists ...
	I1207 23:38:04.630712  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Status}}
	I1207 23:38:04.630911  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Status}}
	I1207 23:38:04.630953  697202 config.go:182] Loaded profile config "calico-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:38:04.633011  697202 out.go:179] * Verifying Kubernetes components...
	I1207 23:38:04.634261  697202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 23:38:04.661277  697202 addons.go:239] Setting addon default-storageclass=true in "calico-600852"
	I1207 23:38:04.661336  697202 host.go:66] Checking if "calico-600852" exists ...
	I1207 23:38:04.661838  697202 cli_runner.go:164] Run: docker container inspect calico-600852 --format={{.State.Status}}
	I1207 23:38:04.665433  697202 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 23:38:04.667473  697202 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:38:04.667495  697202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 23:38:04.667561  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:38:04.696447  697202 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 23:38:04.696474  697202 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 23:38:04.696633  697202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-600852
	I1207 23:38:04.704363  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:38:04.729350  697202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/calico-600852/id_rsa Username:docker}
	I1207 23:38:04.760949  697202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 23:38:04.835500  697202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1207 23:38:04.857304  697202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 23:38:04.885364  697202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 23:38:05.074233  697202 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1207 23:38:05.076126  697202 node_ready.go:35] waiting up to 15m0s for node "calico-600852" to be "Ready" ...
	I1207 23:38:05.395523  697202 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1207 23:38:01.121548  697240 addons.go:530] duration metric: took 636.135374ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1207 23:38:01.369973  697240 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-600852" context rescaled to 1 replicas
	W1207 23:38:02.871108  697240 node_ready.go:57] node "custom-flannel-600852" has "Ready":"False" status (will retry)
	W1207 23:38:04.876490  697240 node_ready.go:57] node "custom-flannel-600852" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 07 23:37:25 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:25.5316673Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 07 23:37:25 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:25.536425089Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 07 23:37:25 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:25.536457627Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.708029594Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=584dca10-c7b5-4712-a9ae-7af36b03f00c name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.710944905Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5973ec6b-5ee8-4d69-94a6-cfb4b1e0a76d name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.714272384Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt/dashboard-metrics-scraper" id=a8bde38e-2572-4e21-b53c-ddcd79685cdb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.714452342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.723144759Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.723661297Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.763502533Z" level=info msg="Created container 97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt/dashboard-metrics-scraper" id=a8bde38e-2572-4e21-b53c-ddcd79685cdb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.764442981Z" level=info msg="Starting container: 97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42" id=b386a165-c253-41dd-a76c-a8b9608c5427 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.767119416Z" level=info msg="Started container" PID=1771 containerID=97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt/dashboard-metrics-scraper id=b386a165-c253-41dd-a76c-a8b9608c5427 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f10901483b03ef3a341449437aabf6b005d605472b49a06f6776d08aaaf33d7d
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.814445032Z" level=info msg="Removing container: 3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21" id=ab9e3595-7450-4b77-a9c9-fc02486dbb81 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:37:37 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:37.827658716Z" level=info msg="Removed container 3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt/dashboard-metrics-scraper" id=ab9e3595-7450-4b77-a9c9-fc02486dbb81 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.83695981Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=afc64b4c-9034-4557-a979-51ebb52d7441 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.837982762Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=77d56115-de8e-423f-a0cb-320dd9e77553 name=/runtime.v1.ImageService/ImageStatus
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.839074822Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bf4775b2-0888-481d-b47c-9b102d975fb1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.839215044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.844614736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.844817298Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bf221568ff80d2378228bc4a14119dc06590041f1740374c704bc029478880ac/merged/etc/passwd: no such file or directory"
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.8448541Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bf221568ff80d2378228bc4a14119dc06590041f1740374c704bc029478880ac/merged/etc/group: no such file or directory"
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.845153746Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.87669398Z" level=info msg="Created container 058865ddda268775bdf21f4e133779ac38c262c9ded903bf758c68c656ba4b37: kube-system/storage-provisioner/storage-provisioner" id=bf4775b2-0888-481d-b47c-9b102d975fb1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.87740659Z" level=info msg="Starting container: 058865ddda268775bdf21f4e133779ac38c262c9ded903bf758c68c656ba4b37" id=331bdfc9-8a64-4939-9149-df11951271c0 name=/runtime.v1.RuntimeService/StartContainer
	Dec 07 23:37:45 default-k8s-diff-port-312944 crio[569]: time="2025-12-07T23:37:45.879412295Z" level=info msg="Started container" PID=1785 containerID=058865ddda268775bdf21f4e133779ac38c262c9ded903bf758c68c656ba4b37 description=kube-system/storage-provisioner/storage-provisioner id=331bdfc9-8a64-4939-9149-df11951271c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cc2a364fc405aa25bc4b6ba5d1d291a8384751748807ea72fcd5ef6b9803965
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	058865ddda268       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   1cc2a364fc405       storage-provisioner                                    kube-system
	97a5b2897354b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago      Exited              dashboard-metrics-scraper   2                   f10901483b03e       dashboard-metrics-scraper-6ffb444bf9-l2qmt             kubernetes-dashboard
	d0dece358b07a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   2a694ad6924b0       kubernetes-dashboard-855c9754f9-x7hx7                  kubernetes-dashboard
	4e915a09b78e0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   f18c93ac1698b       busybox                                                default
	8eb4661f40adb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   34834d5f4639a       coredns-66bc5c9577-p4v2f                               kube-system
	ae571d49269c9       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           55 seconds ago      Running             kube-proxy                  0                   b1ad43600cd73       kube-proxy-7stg5                                       kube-system
	1141bc53141e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   1cc2a364fc405       storage-provisioner                                    kube-system
	03d7391848685       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   705cd5cd1c701       kindnet-55xbl                                          kube-system
	362b83f015210       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           58 seconds ago      Running             kube-scheduler              0                   e357e5f1e3cb6       kube-scheduler-default-k8s-diff-port-312944            kube-system
	fa639c7294ee1       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           58 seconds ago      Running             kube-controller-manager     0                   9beb065dece42       kube-controller-manager-default-k8s-diff-port-312944   kube-system
	b04410a9187c7       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           58 seconds ago      Running             kube-apiserver              0                   64f04b32bfd74       kube-apiserver-default-k8s-diff-port-312944            kube-system
	f27c08f4d2ee8       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           58 seconds ago      Running             etcd                        0                   26acf5ba3f8e7       etcd-default-k8s-diff-port-312944                      kube-system
	
	
	==> coredns [8eb4661f40adb7e3bc509b1d373b2ad35becf93ce0d8b257ae68088048cea1a3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42156 - 52497 "HINFO IN 9139562123407335876.5391113358729451137. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029737409s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-312944
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-312944
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47
	                    minikube.k8s.io/name=default-k8s-diff-port-312944
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_07T23_36_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 07 Dec 2025 23:36:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-312944
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 07 Dec 2025 23:37:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 07 Dec 2025 23:37:45 +0000   Sun, 07 Dec 2025 23:36:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 07 Dec 2025 23:37:45 +0000   Sun, 07 Dec 2025 23:36:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 07 Dec 2025 23:37:45 +0000   Sun, 07 Dec 2025 23:36:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 07 Dec 2025 23:37:45 +0000   Sun, 07 Dec 2025 23:36:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-312944
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                bd0038bf-5fca-4fcf-bfc4-04aff0b70aa3
	  Boot ID:                    9abaf27f-ec91-40bd-9319-d1c86dd34102
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-p4v2f                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-default-k8s-diff-port-312944                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-55xbl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-312944             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-312944    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-7stg5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-312944             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-l2qmt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-x7hx7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 55s                  kube-proxy       
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                 node-controller  Node default-k8s-diff-port-312944 event: Registered Node default-k8s-diff-port-312944 in Controller
	  Normal  NodeReady                100s                 kubelet          Node default-k8s-diff-port-312944 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node default-k8s-diff-port-312944 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                  node-controller  Node default-k8s-diff-port-312944 event: Registered Node default-k8s-diff-port-312944 in Controller
	
	
	==> dmesg <==
	[  +0.006319] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.495443] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006323] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494714] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.006745] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.494455] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007157] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493953] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007413] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493695] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007143] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493798] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007702] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493076] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008458] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493060] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008891] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492811] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.007996] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.493243] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008588] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.492559] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.008931] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	[  +1.491699] IPv4: martian destination 127.0.0.11 from 10.244.1.4, dev veth07dc1854
	[  +0.010378] IPv4: martian destination 127.0.0.11 from 10.244.1.2, dev vetha2ff9b92
	
	
	==> etcd [f27c08f4d2ee8d8898a367bb16db44c1f22130d15e95d71881aa776e8567269c] <==
	{"level":"warn","ts":"2025-12-07T23:37:13.629818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.639593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.649772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.658617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.669763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.679396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.688577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.697365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.707600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.716215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.724480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.732901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.740894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.748408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.756034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.763190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.770597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.778408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.784967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.792549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.812829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.821142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.830414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:13.892600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-07T23:37:40.889271Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.602571ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766684848834146 > lease_revoke:<id:5b339afb2d2945da>","response":"size:29"}
	
	
	==> kernel <==
	 23:38:10 up  2:20,  0 user,  load average: 4.34, 3.04, 2.19
	Linux default-k8s-diff-port-312944 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [03d7391848685b4e4adc0e0cbeb5a8f00b9ca0ce5cf2a95d3e89a3e413264d20] <==
	I1207 23:37:15.304805       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1207 23:37:15.305110       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1207 23:37:15.305281       1 main.go:148] setting mtu 1500 for CNI 
	I1207 23:37:15.305295       1 main.go:178] kindnetd IP family: "ipv4"
	I1207 23:37:15.305314       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-07T23:37:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1207 23:37:15.508356       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1207 23:37:15.664693       1 controller.go:381] "Waiting for informer caches to sync"
	I1207 23:37:15.664761       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1207 23:37:15.703181       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1207 23:37:16.065096       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1207 23:37:16.065126       1 metrics.go:72] Registering metrics
	I1207 23:37:16.065219       1 controller.go:711] "Syncing nftables rules"
	I1207 23:37:25.509108       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:37:25.509175       1 main.go:301] handling current node
	I1207 23:37:35.513026       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:37:35.513074       1 main.go:301] handling current node
	I1207 23:37:45.509257       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:37:45.509301       1 main.go:301] handling current node
	I1207 23:37:55.511418       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:37:55.511457       1 main.go:301] handling current node
	I1207 23:38:05.515141       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1207 23:38:05.515177       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b04410a9187c7167576fa7f9cb5bf5a761981c61b37ea3b68eb353c721baab8f] <==
	I1207 23:37:14.402469       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1207 23:37:14.405407       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 23:37:14.406560       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1207 23:37:14.406584       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1207 23:37:14.408473       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1207 23:37:14.408954       1 policy_source.go:240] refreshing policies
	I1207 23:37:14.406526       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1207 23:37:14.406610       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1207 23:37:14.410127       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1207 23:37:14.428149       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 23:37:14.433030       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1207 23:37:14.435480       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 23:37:14.739718       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1207 23:37:14.740040       1 controller.go:667] quota admission added evaluator for: namespaces
	I1207 23:37:14.777775       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1207 23:37:14.800445       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 23:37:14.809207       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 23:37:14.855242       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.127.225"}
	I1207 23:37:14.867448       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.190.220"}
	I1207 23:37:15.304529       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 23:37:17.895015       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:37:17.895068       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1207 23:37:18.243728       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:37:18.243728       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 23:37:18.494131       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [fa639c7294ee1af933ce6c68db15470c1c2d5d2c404c5e0568eaac61e7ede373] <==
	I1207 23:37:17.855712       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-312944"
	I1207 23:37:17.855767       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1207 23:37:17.861432       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1207 23:37:17.861437       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1207 23:37:17.863842       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1207 23:37:17.865888       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1207 23:37:17.867940       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1207 23:37:17.869465       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1207 23:37:17.871688       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1207 23:37:17.890403       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1207 23:37:17.890433       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1207 23:37:17.890442       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1207 23:37:17.890449       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1207 23:37:17.891646       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1207 23:37:17.891682       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1207 23:37:17.891702       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1207 23:37:17.891781       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1207 23:37:17.891874       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1207 23:37:17.891931       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1207 23:37:17.893689       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1207 23:37:17.897023       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1207 23:37:17.915251       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 23:37:17.918928       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1207 23:37:17.918946       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1207 23:37:17.918958       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [ae571d49269c915740fb2cf23f9df93b135ad116f7f7e358c4a59ecfac859a14] <==
	I1207 23:37:15.133164       1 server_linux.go:53] "Using iptables proxy"
	I1207 23:37:15.213255       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1207 23:37:15.314181       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1207 23:37:15.314225       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1207 23:37:15.314345       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1207 23:37:15.336886       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 23:37:15.336955       1 server_linux.go:132] "Using iptables Proxier"
	I1207 23:37:15.342948       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1207 23:37:15.343445       1 server.go:527] "Version info" version="v1.34.2"
	I1207 23:37:15.343472       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:37:15.347446       1 config.go:309] "Starting node config controller"
	I1207 23:37:15.347470       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1207 23:37:15.347492       1 config.go:403] "Starting serviceCIDR config controller"
	I1207 23:37:15.347506       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1207 23:37:15.347544       1 config.go:200] "Starting service config controller"
	I1207 23:37:15.347550       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1207 23:37:15.347572       1 config.go:106] "Starting endpoint slice config controller"
	I1207 23:37:15.347577       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1207 23:37:15.347605       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1207 23:37:15.447704       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1207 23:37:15.447730       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1207 23:37:15.447727       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [362b83f015210f03925637b1b0598b825d674607d060c054cf459ff6794854a5] <==
	I1207 23:37:13.029000       1 serving.go:386] Generated self-signed cert in-memory
	W1207 23:37:14.338760       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 23:37:14.338899       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 23:37:14.338912       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 23:37:14.338922       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 23:37:14.373269       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1207 23:37:14.373301       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 23:37:14.376902       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:37:14.377569       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 23:37:14.377274       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1207 23:37:14.377298       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1207 23:37:14.479851       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 07 23:37:18 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:18.456895     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a46c4c27-7f70-49e5-9552-52151b217b5d-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-l2qmt\" (UID: \"a46c4c27-7f70-49e5-9552-52151b217b5d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt"
	Dec 07 23:37:18 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:18.456924     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xcp9\" (UniqueName: \"kubernetes.io/projected/8ab1a416-3cea-4d56-8a53-4645de22a61d-kube-api-access-2xcp9\") pod \"kubernetes-dashboard-855c9754f9-x7hx7\" (UID: \"8ab1a416-3cea-4d56-8a53-4645de22a61d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-x7hx7"
	Dec 07 23:37:21 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:21.088289     730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 07 23:37:21 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:21.760932     730 scope.go:117] "RemoveContainer" containerID="3719bbfe635f807e31451e426c963e5cf8bc57605981d2cb4d4386eac693256f"
	Dec 07 23:37:22 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:22.766550     730 scope.go:117] "RemoveContainer" containerID="3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21"
	Dec 07 23:37:22 default-k8s-diff-port-312944 kubelet[730]: E1207 23:37:22.766728     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2qmt_kubernetes-dashboard(a46c4c27-7f70-49e5-9552-52151b217b5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt" podUID="a46c4c27-7f70-49e5-9552-52151b217b5d"
	Dec 07 23:37:22 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:22.767016     730 scope.go:117] "RemoveContainer" containerID="3719bbfe635f807e31451e426c963e5cf8bc57605981d2cb4d4386eac693256f"
	Dec 07 23:37:23 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:23.772256     730 scope.go:117] "RemoveContainer" containerID="3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21"
	Dec 07 23:37:23 default-k8s-diff-port-312944 kubelet[730]: E1207 23:37:23.772482     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2qmt_kubernetes-dashboard(a46c4c27-7f70-49e5-9552-52151b217b5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt" podUID="a46c4c27-7f70-49e5-9552-52151b217b5d"
	Dec 07 23:37:24 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:24.775404     730 scope.go:117] "RemoveContainer" containerID="3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21"
	Dec 07 23:37:24 default-k8s-diff-port-312944 kubelet[730]: E1207 23:37:24.775641     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2qmt_kubernetes-dashboard(a46c4c27-7f70-49e5-9552-52151b217b5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt" podUID="a46c4c27-7f70-49e5-9552-52151b217b5d"
	Dec 07 23:37:29 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:29.144916     730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-x7hx7" podStartSLOduration=4.922350406 podStartE2EDuration="11.14485631s" podCreationTimestamp="2025-12-07 23:37:18 +0000 UTC" firstStartedPulling="2025-12-07 23:37:18.687110658 +0000 UTC m=+7.077964335" lastFinishedPulling="2025-12-07 23:37:24.909616573 +0000 UTC m=+13.300470239" observedRunningTime="2025-12-07 23:37:25.792743777 +0000 UTC m=+14.183597466" watchObservedRunningTime="2025-12-07 23:37:29.14485631 +0000 UTC m=+17.535709996"
	Dec 07 23:37:37 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:37.707518     730 scope.go:117] "RemoveContainer" containerID="3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21"
	Dec 07 23:37:37 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:37.812418     730 scope.go:117] "RemoveContainer" containerID="3efe7df9fe00bad6c4287136d3c2c464b8278703353f2ab4ceeec6f81df30d21"
	Dec 07 23:37:37 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:37.812694     730 scope.go:117] "RemoveContainer" containerID="97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42"
	Dec 07 23:37:37 default-k8s-diff-port-312944 kubelet[730]: E1207 23:37:37.812930     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2qmt_kubernetes-dashboard(a46c4c27-7f70-49e5-9552-52151b217b5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt" podUID="a46c4c27-7f70-49e5-9552-52151b217b5d"
	Dec 07 23:37:42 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:42.795758     730 scope.go:117] "RemoveContainer" containerID="97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42"
	Dec 07 23:37:42 default-k8s-diff-port-312944 kubelet[730]: E1207 23:37:42.796003     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2qmt_kubernetes-dashboard(a46c4c27-7f70-49e5-9552-52151b217b5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt" podUID="a46c4c27-7f70-49e5-9552-52151b217b5d"
	Dec 07 23:37:45 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:45.836529     730 scope.go:117] "RemoveContainer" containerID="1141bc53141e8e773858f382cacf8f035e2c792f49fad9bc151a5de36582d819"
	Dec 07 23:37:55 default-k8s-diff-port-312944 kubelet[730]: I1207 23:37:55.707316     730 scope.go:117] "RemoveContainer" containerID="97a5b2897354b4d5337d92f0bb24a680df6f27de664ccfb0f4e72604947f4e42"
	Dec 07 23:37:55 default-k8s-diff-port-312944 kubelet[730]: E1207 23:37:55.707615     730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2qmt_kubernetes-dashboard(a46c4c27-7f70-49e5-9552-52151b217b5d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2qmt" podUID="a46c4c27-7f70-49e5-9552-52151b217b5d"
	Dec 07 23:38:05 default-k8s-diff-port-312944 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 07 23:38:05 default-k8s-diff-port-312944 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 07 23:38:05 default-k8s-diff-port-312944 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 07 23:38:05 default-k8s-diff-port-312944 systemd[1]: kubelet.service: Consumed 1.817s CPU time.
	
	
	==> kubernetes-dashboard [d0dece358b07ad46edbe28384e450be226ec46d5ce2446c6c96076c671ea49ad] <==
	2025/12/07 23:37:24 Starting overwatch
	2025/12/07 23:37:24 Using namespace: kubernetes-dashboard
	2025/12/07 23:37:24 Using in-cluster config to connect to apiserver
	2025/12/07 23:37:24 Using secret token for csrf signing
	2025/12/07 23:37:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/07 23:37:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/07 23:37:25 Successful initial request to the apiserver, version: v1.34.2
	2025/12/07 23:37:25 Generating JWE encryption key
	2025/12/07 23:37:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/07 23:37:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/07 23:37:25 Initializing JWE encryption key from synchronized object
	2025/12/07 23:37:25 Creating in-cluster Sidecar client
	2025/12/07 23:37:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/07 23:37:25 Serving insecurely on HTTP port: 9090
	2025/12/07 23:37:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [058865ddda268775bdf21f4e133779ac38c262c9ded903bf758c68c656ba4b37] <==
	I1207 23:37:45.894150       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 23:37:45.901393       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 23:37:45.901436       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1207 23:37:45.903638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:49.359127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:53.620511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:37:57.219456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:00.273023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:03.295044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:03.299756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:38:03.299940       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 23:38:03.300005       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8139ddd6-5276-4d69-8ef0-8cf0f6816009", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-312944_9bd176d2-f9df-496d-8723-a8ee1ef620ac became leader
	I1207 23:38:03.300120       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-312944_9bd176d2-f9df-496d-8723-a8ee1ef620ac!
	W1207 23:38:03.301859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:03.305076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1207 23:38:03.400407       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-312944_9bd176d2-f9df-496d-8723-a8ee1ef620ac!
	W1207 23:38:05.310056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:05.320683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:07.325563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:07.330579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:09.334043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:09.341195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:11.344942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1207 23:38:11.348856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [1141bc53141e8e773858f382cacf8f035e2c792f49fad9bc151a5de36582d819] <==
	I1207 23:37:15.095620       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1207 23:37:45.098622       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944: exit status 2 (334.329719ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-312944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.15s)
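For reference, the two post-mortem checks logged above can be re-run by hand against the same profile. A minimal sketch, assuming the default-k8s-diff-port-312944 cluster still exists; the commands are copied from the helpers_test.go lines above, with the jsonpath expression quoted here only for shell safety:

	# Query the API server status field for the profile (the harness tolerates exit status 2)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944

	# List pods that are not in the Running phase, across all namespaces
	kubectl --context default-k8s-diff-port-312944 get po -o=jsonpath='{.items[*].metadata.name}' -A --field-selector=status.phase!=Running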
E1207 23:39:09.810353  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:09.816793  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:09.829670  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:09.851172  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:09.892664  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:09.974118  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:10.135720  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:10.457696  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:11.099846  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:12.382124  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:14.944520  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:39:20.066584  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
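The repeated cert_rotation errors above appear to come from a kubeconfig entry that still references the client certificate of the old-k8s-version-320477 profile after its files were removed. A minimal sketch of how such a stale context could be inspected and cleared locally; this cleanup is an illustration only and is not something the harness performs:

	# Show which contexts the kubeconfig still references
	kubectl config get-contexts

	# Remove the stale context entry, or delete the whole profile with minikube
	kubectl config delete-context old-k8s-version-320477
	out/minikube-linux-amd64 delete -p old-k8s-version-320477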

                                                
                                    

Test pass (351/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 12.26
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.2/json-events 9.66
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.24
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 9.78
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.77
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.64
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.42
29 TestDownloadOnlyKic 0.42
30 TestBinaryMirror 0.83
31 TestOffline 59.22
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 100.84
40 TestAddons/serial/GCPAuth/Namespaces 3
41 TestAddons/serial/GCPAuth/FakeCredentials 10.44
57 TestAddons/StoppedEnableDisable 16.84
58 TestCertOptions 25.71
59 TestCertExpiration 214.62
61 TestForceSystemdFlag 26.39
62 TestForceSystemdEnv 38.17
67 TestErrorSpam/setup 22.85
68 TestErrorSpam/start 0.68
69 TestErrorSpam/status 0.96
70 TestErrorSpam/pause 5.9
71 TestErrorSpam/unpause 5.01
72 TestErrorSpam/stop 8.17
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 37.82
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.14
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.86
84 TestFunctional/serial/CacheCmd/cache/add_local 1.97
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 79.44
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.27
95 TestFunctional/serial/LogsFileCmd 1.26
96 TestFunctional/serial/InvalidService 18.09
98 TestFunctional/parallel/ConfigCmd 0.5
99 TestFunctional/parallel/DashboardCmd 16.8
100 TestFunctional/parallel/DryRun 0.39
101 TestFunctional/parallel/InternationalLanguage 0.17
102 TestFunctional/parallel/StatusCmd 0.99
106 TestFunctional/parallel/ServiceCmdConnect 11.97
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 24.67
110 TestFunctional/parallel/SSHCmd 0.65
111 TestFunctional/parallel/CpCmd 1.82
112 TestFunctional/parallel/MySQL 16.77
113 TestFunctional/parallel/FileSync 0.31
114 TestFunctional/parallel/CertSync 1.72
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
122 TestFunctional/parallel/License 0.39
123 TestFunctional/parallel/ServiceCmd/DeployApp 8.19
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.22
129 TestFunctional/parallel/ServiceCmd/List 0.5
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
132 TestFunctional/parallel/ServiceCmd/Format 0.42
133 TestFunctional/parallel/ServiceCmd/URL 0.42
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
135 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 12.52
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
137 TestFunctional/parallel/ProfileCmd/profile_list 0.43
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
139 TestFunctional/parallel/MountCmd/any-port 7.92
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
143 TestFunctional/parallel/Version/short 0.08
144 TestFunctional/parallel/Version/components 0.54
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
149 TestFunctional/parallel/ImageCommands/ImageBuild 6.43
150 TestFunctional/parallel/ImageCommands/Setup 1.8
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.13
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
153 TestFunctional/parallel/MountCmd/specific-port 1.62
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.78
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.78
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
157 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
162 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 38.55
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.25
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.79
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.92
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.3
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.61
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.14
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 35.93
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.27
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.28
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.75
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.49
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 9.07
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.44
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 1.1
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 7.8
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.19
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 28.06
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.62
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.93
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 19.51
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.34
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.98
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.08
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.57
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.18
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 7.19
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.52
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 6.82
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.47
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.45
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.57
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 1.92
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.25
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.86
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.81
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.81
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.07
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.85
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.74
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.55
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.85
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.59
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.45
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.65
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.45
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.5
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 1.92
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.21
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.55
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.44
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.17
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.42
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 9.35
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 111.1
266 TestMultiControlPlane/serial/DeployApp 6.48
267 TestMultiControlPlane/serial/PingHostFromPods 1.11
268 TestMultiControlPlane/serial/AddWorkerNode 23.68
269 TestMultiControlPlane/serial/NodeLabels 0.07
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
271 TestMultiControlPlane/serial/CopyFile 17.69
272 TestMultiControlPlane/serial/StopSecondaryNode 17.08
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
274 TestMultiControlPlane/serial/RestartSecondaryNode 14.84
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.75
277 TestMultiControlPlane/serial/DeleteSecondaryNode 7.56
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
279 TestMultiControlPlane/serial/StopCluster 30.14
282 TestMultiControlPlane/serial/AddSecondaryNode 58.72
288 TestJSONOutput/start/Command 40.49
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 8
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.24
313 TestKicCustomNetwork/create_custom_network 31.87
314 TestKicCustomNetwork/use_default_bridge_network 24.06
315 TestKicExistingNetwork 22.98
316 TestKicCustomSubnet 22.81
317 TestKicStaticIP 24.55
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 51.96
322 TestMountStart/serial/StartWithMountFirst 7.95
323 TestMountStart/serial/VerifyMountFirst 0.27
324 TestMountStart/serial/StartWithMountSecond 4.73
325 TestMountStart/serial/VerifyMountSecond 0.27
326 TestMountStart/serial/DeleteFirst 1.7
327 TestMountStart/serial/VerifyMountPostDelete 0.27
328 TestMountStart/serial/Stop 1.27
329 TestMountStart/serial/RestartStopped 8.1
330 TestMountStart/serial/VerifyMountPostStop 0.28
333 TestMultiNode/serial/FreshStart2Nodes 62.59
334 TestMultiNode/serial/DeployApp2Nodes 4.5
335 TestMultiNode/serial/PingHostFrom2Pods 0.76
336 TestMultiNode/serial/AddNode 56.23
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.67
339 TestMultiNode/serial/CopyFile 10.07
340 TestMultiNode/serial/StopNode 2.28
341 TestMultiNode/serial/StartAfterStop 7.22
342 TestMultiNode/serial/RestartKeepsNodes 77.18
343 TestMultiNode/serial/DeleteNode 5.3
344 TestMultiNode/serial/StopMultiNode 28.57
345 TestMultiNode/serial/RestartMultiNode 44.57
346 TestMultiNode/serial/ValidateNameConflict 22.76
353 TestScheduledStopUnix 99.94
356 TestInsufficientStorage 9.07
357 TestRunningBinaryUpgrade 50.15
359 TestKubernetesUpgrade 294.85
360 TestMissingContainerUpgrade 80.33
361 TestStoppedBinaryUpgrade/Setup 3.3
363 TestPause/serial/Start 56.42
364 TestStoppedBinaryUpgrade/Upgrade 325.41
365 TestPause/serial/SecondStartNoReconfiguration 6.32
375 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
376 TestNoKubernetes/serial/StartWithK8s 19.95
377 TestNoKubernetes/serial/StartWithStopK8s 23.03
378 TestNoKubernetes/serial/Start 3.98
379 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
380 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
381 TestNoKubernetes/serial/ProfileList 16.28
382 TestNoKubernetes/serial/Stop 1.29
383 TestNoKubernetes/serial/StartNoArgs 6.97
384 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
392 TestNetworkPlugins/group/false 3.76
397 TestStartStop/group/old-k8s-version/serial/FirstStart 51.2
399 TestStartStop/group/no-preload/serial/FirstStart 45.57
400 TestStartStop/group/old-k8s-version/serial/DeployApp 9.27
402 TestStartStop/group/old-k8s-version/serial/Stop 16.1
403 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
404 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
405 TestStartStop/group/old-k8s-version/serial/SecondStart 49.05
407 TestStartStop/group/embed-certs/serial/FirstStart 68.63
408 TestStartStop/group/no-preload/serial/DeployApp 9.25
410 TestStartStop/group/no-preload/serial/Stop 18.91
411 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
412 TestStartStop/group/no-preload/serial/SecondStart 46.87
413 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
414 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
415 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
418 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.15
419 TestStartStop/group/embed-certs/serial/DeployApp 10.26
421 TestStartStop/group/newest-cni/serial/FirstStart 21.51
422 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
424 TestStartStop/group/embed-certs/serial/Stop 16.81
425 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
426 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
428 TestStartStop/group/newest-cni/serial/DeployApp 0
430 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.35
431 TestStartStop/group/embed-certs/serial/SecondStart 54.35
432 TestNetworkPlugins/group/auto/Start 46.08
433 TestStartStop/group/newest-cni/serial/Stop 8.19
434 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
435 TestStartStop/group/newest-cni/serial/SecondStart 12.44
436 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
437 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
438 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
439 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
442 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.2
443 TestNetworkPlugins/group/kindnet/Start 68.88
444 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
445 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.68
446 TestNetworkPlugins/group/auto/KubeletFlags 0.31
447 TestNetworkPlugins/group/auto/NetCatPod 8.19
448 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
449 TestNetworkPlugins/group/auto/DNS 0.14
450 TestNetworkPlugins/group/auto/Localhost 0.12
451 TestNetworkPlugins/group/auto/HairPin 0.11
452 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
453 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
455 TestNetworkPlugins/group/calico/Start 51.49
456 TestNetworkPlugins/group/custom-flannel/Start 46.54
457 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
458 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
459 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
460 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
462 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
463 TestNetworkPlugins/group/kindnet/NetCatPod 8.27
464 TestNetworkPlugins/group/kindnet/DNS 0.15
465 TestNetworkPlugins/group/kindnet/Localhost 0.13
466 TestNetworkPlugins/group/kindnet/HairPin 0.12
467 TestNetworkPlugins/group/enable-default-cni/Start 72.5
468 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
469 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.28
470 TestNetworkPlugins/group/calico/ControllerPod 6.01
471 TestNetworkPlugins/group/custom-flannel/DNS 0.13
472 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
473 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
474 TestNetworkPlugins/group/calico/KubeletFlags 0.34
475 TestNetworkPlugins/group/calico/NetCatPod 9.2
476 TestNetworkPlugins/group/flannel/Start 47.99
477 TestNetworkPlugins/group/calico/DNS 0.13
478 TestNetworkPlugins/group/calico/Localhost 0.1
479 TestNetworkPlugins/group/calico/HairPin 0.11
480 TestNetworkPlugins/group/bridge/Start 36.53
481 TestNetworkPlugins/group/flannel/ControllerPod 6.01
482 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
483 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
484 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
485 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
486 TestNetworkPlugins/group/bridge/NetCatPod 9.19
487 TestNetworkPlugins/group/flannel/NetCatPod 10.19
488 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
489 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
490 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
491 TestNetworkPlugins/group/bridge/DNS 0.12
492 TestNetworkPlugins/group/bridge/Localhost 0.1
493 TestNetworkPlugins/group/bridge/HairPin 0.09
494 TestNetworkPlugins/group/flannel/DNS 0.13
495 TestNetworkPlugins/group/flannel/Localhost 0.09
496 TestNetworkPlugins/group/flannel/HairPin 0.09
x
+
TestDownloadOnly/v1.28.0/json-events (12.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-210257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-210257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.259823518s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.26s)
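The run above drives the minikube binary with exactly the flags shown in the (dbg) Run line. As a minimal, hypothetical Go harness for reproducing and timing the same download-only invocation locally (this is an illustration, not the actual aaa_download_only_test.go code; binary path and profile name are copied from the report):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Same invocation as the (dbg) Run line above.
	start := time.Now()
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-o=json", "--download-only", "-p", "download-only-210257",
		"--force", "--alsologtostderr",
		"--kubernetes-version=v1.28.0",
		"--container-runtime=crio", "--driver=docker")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("download-only start failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Printf("download-only start finished in %s (%d bytes of JSON events)\n",
		time.Since(start), len(out))
}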

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1207 22:54:43.322696  393125 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1207 22:54:43.322792  393125 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
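The preload check above only asserts that the cached tarball exists at the expected path. A minimal sketch of the same existence check, assuming MINIKUBE_HOME points at the .minikube directory used in this run (illustrative only, not minikube's preload.go):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Cache layout taken from the "Found local preload" line above.
	tarball := filepath.Join(os.Getenv("MINIKUBE_HOME"),
		"cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
		os.Exit(1)
	}
	fmt.Println("preload present:", tarball)
}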

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-210257
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-210257: exit status 85 (76.59441ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-210257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-210257 │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:54:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:54:31.118669  393137 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:54:31.118779  393137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:54:31.118791  393137 out.go:374] Setting ErrFile to fd 2...
	I1207 22:54:31.118797  393137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:54:31.119055  393137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	W1207 22:54:31.119244  393137 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22054-389542/.minikube/config/config.json: open /home/jenkins/minikube-integration/22054-389542/.minikube/config/config.json: no such file or directory
	I1207 22:54:31.119781  393137 out.go:368] Setting JSON to true
	I1207 22:54:31.120892  393137 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5815,"bootTime":1765142256,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:54:31.120954  393137 start.go:143] virtualization: kvm guest
	I1207 22:54:31.124745  393137 out.go:99] [download-only-210257] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1207 22:54:31.124908  393137 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball: no such file or directory
	I1207 22:54:31.124972  393137 notify.go:221] Checking for updates...
	I1207 22:54:31.126374  393137 out.go:171] MINIKUBE_LOCATION=22054
	I1207 22:54:31.127787  393137 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:54:31.129542  393137 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 22:54:31.131195  393137 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 22:54:31.132839  393137 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1207 22:54:31.135346  393137 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 22:54:31.135697  393137 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:54:31.160514  393137 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:54:31.160635  393137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:54:31.223453  393137 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-07 22:54:31.213024072 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:54:31.223623  393137 docker.go:319] overlay module found
	I1207 22:54:31.225353  393137 out.go:99] Using the docker driver based on user configuration
	I1207 22:54:31.225389  393137 start.go:309] selected driver: docker
	I1207 22:54:31.225397  393137 start.go:927] validating driver "docker" against <nil>
	I1207 22:54:31.225504  393137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:54:31.281235  393137 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-07 22:54:31.27164068 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:54:31.281460  393137 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:54:31.281987  393137 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1207 22:54:31.282121  393137 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 22:54:31.283880  393137 out.go:171] Using Docker driver with root privileges
	I1207 22:54:31.285075  393137 cni.go:84] Creating CNI manager for ""
	I1207 22:54:31.285139  393137 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 22:54:31.285150  393137 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 22:54:31.285218  393137 start.go:353] cluster config:
	{Name:download-only-210257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-210257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:54:31.286479  393137 out.go:99] Starting "download-only-210257" primary control-plane node in "download-only-210257" cluster
	I1207 22:54:31.286499  393137 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 22:54:31.287493  393137 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1207 22:54:31.287530  393137 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1207 22:54:31.287648  393137 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 22:54:31.305835  393137 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1207 22:54:31.306043  393137 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1207 22:54:31.306133  393137 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1207 22:54:31.620990  393137 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1207 22:54:31.621024  393137 cache.go:65] Caching tarball of preloaded images
	I1207 22:54:31.621222  393137 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1207 22:54:31.623307  393137 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1207 22:54:31.623352  393137 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1207 22:54:31.720815  393137 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1207 22:54:31.720991  393137 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1207 22:54:36.945862  393137 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	
	
	* The control-plane node download-only-210257 host does not exist
	  To start a cluster, run: "minikube start -p download-only-210257"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
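Because --download-only never creates a host ("The control-plane node download-only-210257 host does not exist" above), a non-zero exit from "minikube logs" is the expected outcome here; the run recorded exit status 85. A minimal sketch of checking that expectation, assuming the same binary path and profile name (the assertion helper is hypothetical, not the test's own code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "minikube logs" should fail because the download-only profile has no host.
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-210257")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		fmt.Printf("logs exited with status %d (non-zero, as expected)\n", exitErr.ExitCode())
	case err == nil:
		fmt.Println("unexpected: logs succeeded without a running host")
	default:
		fmt.Println("could not invoke minikube logs:", err)
	}
}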

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-210257
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (9.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-780730 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-780730 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.656705309s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (9.66s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1207 22:54:53.455969  393125 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1207 22:54:53.456030  393125 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-780730
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-780730: exit status 85 (78.880538ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-210257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-210257 │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │ 07 Dec 25 22:54 UTC │
	│ delete  │ -p download-only-210257                                                                                                                                                   │ download-only-210257 │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │ 07 Dec 25 22:54 UTC │
	│ start   │ -o=json --download-only -p download-only-780730 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-780730 │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:54:43
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:54:43.854339  393514 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:54:43.854630  393514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:54:43.854641  393514 out.go:374] Setting ErrFile to fd 2...
	I1207 22:54:43.854645  393514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:54:43.854865  393514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:54:43.855345  393514 out.go:368] Setting JSON to true
	I1207 22:54:43.856245  393514 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5828,"bootTime":1765142256,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:54:43.856310  393514 start.go:143] virtualization: kvm guest
	I1207 22:54:43.858563  393514 out.go:99] [download-only-780730] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:54:43.858808  393514 notify.go:221] Checking for updates...
	I1207 22:54:43.860289  393514 out.go:171] MINIKUBE_LOCATION=22054
	I1207 22:54:43.861803  393514 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:54:43.863337  393514 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 22:54:43.864664  393514 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 22:54:43.866239  393514 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1207 22:54:43.868764  393514 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 22:54:43.869092  393514 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:54:43.892959  393514 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:54:43.893040  393514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:54:43.946796  393514 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-07 22:54:43.936886315 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:54:43.946910  393514 docker.go:319] overlay module found
	I1207 22:54:43.948953  393514 out.go:99] Using the docker driver based on user configuration
	I1207 22:54:43.948999  393514 start.go:309] selected driver: docker
	I1207 22:54:43.949006  393514 start.go:927] validating driver "docker" against <nil>
	I1207 22:54:43.949110  393514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:54:44.008120  393514 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-07 22:54:43.998227277 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:54:44.008304  393514 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:54:44.009077  393514 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1207 22:54:44.009287  393514 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 22:54:44.011520  393514 out.go:171] Using Docker driver with root privileges
	I1207 22:54:44.012972  393514 cni.go:84] Creating CNI manager for ""
	I1207 22:54:44.013044  393514 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 22:54:44.013056  393514 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 22:54:44.013139  393514 start.go:353] cluster config:
	{Name:download-only-780730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-780730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:54:44.014713  393514 out.go:99] Starting "download-only-780730" primary control-plane node in "download-only-780730" cluster
	I1207 22:54:44.014740  393514 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 22:54:44.016147  393514 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1207 22:54:44.016199  393514 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 22:54:44.016371  393514 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 22:54:44.034723  393514 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1207 22:54:44.034891  393514 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1207 22:54:44.034909  393514 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1207 22:54:44.034919  393514 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1207 22:54:44.034930  393514 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1207 22:54:44.357041  393514 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1207 22:54:44.357070  393514 cache.go:65] Caching tarball of preloaded images
	I1207 22:54:44.357262  393514 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1207 22:54:44.358996  393514 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1207 22:54:44.359015  393514 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1207 22:54:44.465480  393514 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1207 22:54:44.465540  393514 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-780730 host does not exist
	  To start a cluster, run: "minikube start -p download-only-780730"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)
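The download lines in the log above append ?checksum=md5:... (e.g. 40ac2ac600e3e4b9dc7a3f8c6cb2ed91 for the v1.34.2 preload) to the tarball URL, so the fetched file can be verified against the digest returned by the GCS API. A minimal sketch of that verification step only, assuming the tarball path and checksum are passed on the command line (illustrative, not minikube's download.go):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 recomputes the file's MD5 and compares it with the expected digest.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	if len(os.Args) != 3 {
		fmt.Println("usage: verify <tarball> <md5>")
		os.Exit(2)
	}
	if err := verifyMD5(os.Args[1], os.Args[2]); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}

Usage, with values from the log above: go run verify.go preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 40ac2ac600e3e4b9dc7a3f8c6cb2ed91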

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-780730
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (9.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-853065 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-853065 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.778345148s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (9.78s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1207 22:55:03.704759  393125 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1207 22:55:03.704805  393125 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-853065
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-853065: exit status 85 (767.152246ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-210257 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-210257 │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │ 07 Dec 25 22:54 UTC │
	│ delete  │ -p download-only-210257                                                                                                                                                          │ download-only-210257 │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │ 07 Dec 25 22:54 UTC │
	│ start   │ -o=json --download-only -p download-only-780730 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-780730 │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │ 07 Dec 25 22:54 UTC │
	│ delete  │ -p download-only-780730                                                                                                                                                          │ download-only-780730 │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │ 07 Dec 25 22:54 UTC │
	│ start   │ -o=json --download-only -p download-only-853065 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-853065 │ jenkins │ v1.37.0 │ 07 Dec 25 22:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/07 22:54:53
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 22:54:53.982764  393881 out.go:360] Setting OutFile to fd 1 ...
	I1207 22:54:53.983004  393881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:54:53.983012  393881 out.go:374] Setting ErrFile to fd 2...
	I1207 22:54:53.983016  393881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 22:54:53.983217  393881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 22:54:53.983725  393881 out.go:368] Setting JSON to true
	I1207 22:54:53.984638  393881 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5838,"bootTime":1765142256,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 22:54:53.984704  393881 start.go:143] virtualization: kvm guest
	I1207 22:54:53.986682  393881 out.go:99] [download-only-853065] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 22:54:53.986874  393881 notify.go:221] Checking for updates...
	I1207 22:54:53.988136  393881 out.go:171] MINIKUBE_LOCATION=22054
	I1207 22:54:53.989488  393881 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 22:54:53.990740  393881 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 22:54:53.991969  393881 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 22:54:53.993247  393881 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1207 22:54:53.996060  393881 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 22:54:53.996382  393881 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 22:54:54.023117  393881 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 22:54:54.023228  393881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:54:54.077252  393881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-07 22:54:54.067471889 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:54:54.077394  393881 docker.go:319] overlay module found
	I1207 22:54:54.079484  393881 out.go:99] Using the docker driver based on user configuration
	I1207 22:54:54.079524  393881 start.go:309] selected driver: docker
	I1207 22:54:54.079532  393881 start.go:927] validating driver "docker" against <nil>
	I1207 22:54:54.079652  393881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 22:54:54.136781  393881 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-07 22:54:54.127030977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 22:54:54.136955  393881 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1207 22:54:54.137577  393881 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1207 22:54:54.137754  393881 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 22:54:54.139601  393881 out.go:171] Using Docker driver with root privileges
	I1207 22:54:54.140914  393881 cni.go:84] Creating CNI manager for ""
	I1207 22:54:54.140986  393881 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1207 22:54:54.141000  393881 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 22:54:54.141085  393881 start.go:353] cluster config:
	{Name:download-only-853065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-853065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 22:54:54.142486  393881 out.go:99] Starting "download-only-853065" primary control-plane node in "download-only-853065" cluster
	I1207 22:54:54.142504  393881 cache.go:134] Beginning downloading kic base image for docker with crio
	I1207 22:54:54.143832  393881 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1207 22:54:54.143872  393881 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 22:54:54.143999  393881 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1207 22:54:54.161486  393881 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1207 22:54:54.161678  393881 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1207 22:54:54.161699  393881 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1207 22:54:54.161708  393881 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1207 22:54:54.161719  393881 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1207 22:54:54.489158  393881 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1207 22:54:54.489191  393881 cache.go:65] Caching tarball of preloaded images
	I1207 22:54:54.489421  393881 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1207 22:54:54.491319  393881 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1207 22:54:54.491378  393881 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1207 22:54:54.584876  393881 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1207 22:54:54.584929  393881 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/22054-389542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-853065 host does not exist
	  To start a cluster, run: "minikube start -p download-only-853065"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.77s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.64s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-853065
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.42s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.42s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-798136 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-798136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-798136
--- PASS: TestDownloadOnlyKic (0.42s)

                                                
                                    
x
+
TestBinaryMirror (0.83s)

                                                
                                                
=== RUN   TestBinaryMirror
I1207 22:55:06.922685  393125 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-074233 --alsologtostderr --binary-mirror http://127.0.0.1:45187 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-074233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-074233
--- PASS: TestBinaryMirror (0.83s)

                                                
                                    
x
+
TestOffline (59.22s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-504484 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-504484 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (56.617636431s)
helpers_test.go:175: Cleaning up "offline-crio-504484" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-504484
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-504484: (2.606996848s)
--- PASS: TestOffline (59.22s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-746247
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-746247: exit status 85 (67.344392ms)

                                                
                                                
-- stdout --
	* Profile "addons-746247" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-746247"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-746247
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-746247: exit status 85 (68.082108ms)

                                                
                                                
-- stdout --
	* Profile "addons-746247" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-746247"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (100.84s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-746247 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-746247 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m40.842756716s)
--- PASS: TestAddons/Setup (100.84s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (3s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-746247 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-746247 get secret gcp-auth -n new-namespace
addons_test.go:644: (dbg) Non-zero exit: kubectl --context addons-746247 get secret gcp-auth -n new-namespace: exit status 1 (118.656382ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:636: (dbg) Run:  kubectl --context addons-746247 logs -l app=gcp-auth -n gcp-auth
I1207 22:56:49.414121  393125 retry.go:31] will retry after 2.604224796s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2025/12/07 22:56:42 GCP Auth Webhook started!
	2025/12/07 22:56:49 Ready to marshal response ...
	2025/12/07 22:56:49 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:644: (dbg) Run:  kubectl --context addons-746247 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (3.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.44s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-746247 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-746247 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5e12dbfd-83fd-46c1-9d58-5e26d50cf46f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5e12dbfd-83fd-46c1-9d58-5e26d50cf46f] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003913454s
addons_test.go:694: (dbg) Run:  kubectl --context addons-746247 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-746247 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-746247 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.44s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (16.84s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-746247
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-746247: (16.532215056s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-746247
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-746247
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-746247
--- PASS: TestAddons/StoppedEnableDisable (16.84s)

                                                
                                    
x
+
TestCertOptions (25.71s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-185778 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-185778 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.579217641s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-185778 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-185778 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-185778 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-185778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-185778
E1207 23:31:52.262035  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-185778: (2.435810304s)
--- PASS: TestCertOptions (25.71s)

                                                
                                    
x
+
TestCertExpiration (214.62s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-612608 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-612608 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.627234046s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-612608 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-612608 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.156012981s)
helpers_test.go:175: Cleaning up "cert-expiration-612608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-612608
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-612608: (2.838649597s)
--- PASS: TestCertExpiration (214.62s)

                                                
                                    
x
+
TestForceSystemdFlag (26.39s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-728453 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1207 23:30:38.533945  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-728453 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.626197819s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-728453 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-728453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-728453
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-728453: (2.467291329s)
--- PASS: TestForceSystemdFlag (26.39s)

                                                
                                    
x
+
TestForceSystemdEnv (38.17s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-599541 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-599541 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.559065038s)
helpers_test.go:175: Cleaning up "force-systemd-env-599541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-599541
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-599541: (2.612968855s)
--- PASS: TestForceSystemdEnv (38.17s)

                                                
                                    
x
+
TestErrorSpam/setup (22.85s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-195000 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-195000 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-195000 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-195000 --driver=docker  --container-runtime=crio: (22.854238398s)
--- PASS: TestErrorSpam/setup (22.85s)

                                                
                                    
x
+
TestErrorSpam/start (0.68s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
x
+
TestErrorSpam/status (0.96s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 status
--- PASS: TestErrorSpam/status (0.96s)

                                                
                                    
x
+
TestErrorSpam/pause (5.9s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 pause: exit status 80 (1.983253457s)

                                                
                                                
-- stdout --
	* Pausing node nospam-195000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:00:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 pause: exit status 80 (2.242889376s)

                                                
                                                
-- stdout --
	* Pausing node nospam-195000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:00:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 pause: exit status 80 (1.677407839s)

                                                
                                                
-- stdout --
	* Pausing node nospam-195000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:00:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.90s)
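All three pause attempts above fail on the same underlying check: before pausing, minikube lists running containers on the node with "sudo runc list -f json", and that call exits non-zero because /run/runc does not exist. A minimal sketch for repeating the check by hand against this profile (the runc invocation is copied verbatim from the stderr above; the follow-up ls is only a hypothetical diagnostic, not something the harness runs):

	# Re-run the listing step that GUEST_PAUSE wraps (command copied from the error text above)
	out/minikube-linux-amd64 -p nospam-195000 ssh sudo runc list -f json
	# Hypothetical diagnostic: check whether runc's state directory exists on the node
	out/minikube-linux-amd64 -p nospam-195000 ssh ls -ld /run/runc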

                                                
                                    
x
+
TestErrorSpam/unpause (5.01s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 unpause: exit status 80 (1.870415214s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-195000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:00:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 unpause: exit status 80 (1.428481708s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-195000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:00:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 unpause: exit status 80 (1.70562333s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-195000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-07T23:00:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.01s)

                                                
                                    
x
+
TestErrorSpam/stop (8.17s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 stop: (7.951462479s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-195000 --log_dir /tmp/nospam-195000 stop
--- PASS: TestErrorSpam/stop (8.17s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/test/nested/copy/393125/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (37.82s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-826110 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-826110 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.822251263s)
--- PASS: TestFunctional/serial/StartWithProxy (37.82s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.14s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1207 23:01:24.635166  393125 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-826110 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-826110 --alsologtostderr -v=8: (6.141662229s)
functional_test.go:678: soft start took 6.14241736s for "functional-826110" cluster.
I1207 23:01:30.778294  393125 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.14s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-826110 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.86s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-826110 cache add registry.k8s.io/pause:3.3: (1.05051699s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.86s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.97s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-826110 /tmp/TestFunctionalserialCacheCmdcacheadd_local1112008844/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 cache add minikube-local-cache-test:functional-826110
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-826110 cache add minikube-local-cache-test:functional-826110: (1.611767595s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 cache delete minikube-local-cache-test:functional-826110
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-826110
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.97s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-826110 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (295.076471ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
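The cache_reload steps above form a self-contained round trip that can be replayed by hand with the same commands the test invokes (binary path and profile name as used in this run):

	# Remove the cached image from the node's container runtime
	out/minikube-linux-amd64 -p functional-826110 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# inspecti now exits non-zero because the image is gone, as captured above
	out/minikube-linux-amd64 -p functional-826110 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# Reload from minikube's local cache; the final inspecti succeeds again
	out/minikube-linux-amd64 -p functional-826110 cache reload
	out/minikube-linux-amd64 -p functional-826110 ssh sudo crictl inspecti registry.k8s.io/pause:latest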

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 kubectl -- --context functional-826110 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-826110 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (79.44s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-826110 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1207 23:01:52.270512  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:01:52.276904  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:01:52.288309  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:01:52.309793  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:01:52.351239  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:01:52.432694  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:01:52.594216  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:01:52.915913  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:01:53.558017  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:01:54.839615  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:01:57.401806  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:02:02.523434  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:02:12.765128  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:02:33.247148  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-826110 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m19.438840638s)
functional_test.go:776: restart took 1m19.438963794s for "functional-826110" cluster.
I1207 23:02:57.532137  393125 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (79.44s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-826110 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-826110 logs: (1.268801665s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 logs --file /tmp/TestFunctionalserialLogsFileCmd2061524805/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-826110 logs --file /tmp/TestFunctionalserialLogsFileCmd2061524805/001/logs.txt: (1.258429527s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)
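The two logs subtests differ only in where the output goes: stdout versus a file passed via --file. A small Go sketch of the file variant, assuming the same binary and profile as above and a hypothetical destination path (the test uses a per-run temp directory):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	dst := "/tmp/minikube-logs.txt" // hypothetical destination path
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-826110",
		"logs", "--file", dst)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube logs failed: %v", err)
	}
	// The command should leave the full cluster log at dst.
	info, err := os.Stat(dst)
	if err != nil {
		log.Fatalf("expected log file was not written: %v", err)
	}
	fmt.Printf("wrote %d bytes of cluster logs to %s\n", info.Size(), dst)
}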

                                                
                                    
TestFunctional/serial/InvalidService (18.09s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-826110 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-826110
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-826110: exit status 115 (346.963977ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31056 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-826110 delete -f testdata/invalidsvc.yaml
E1207 23:03:14.209072  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2332: (dbg) Done: kubectl --context functional-826110 delete -f testdata/invalidsvc.yaml: (14.567624549s)
--- PASS: TestFunctional/serial/InvalidService (18.09s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-826110 config get cpus: exit status 14 (97.572235ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-826110 config get cpus: exit status 14 (88.893771ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
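Exit status 14 is the expected result when `config get` is asked for a key that has been unset; the subtest cycles unset/get/set/get/unset/get to prove both directions. A rough standalone reproduction of that cycle (same binary and profile; the expected exit codes are taken from this log, not from any other source):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a minikube config subcommand and returns its exit code.
func run(args ...string) int {
	cmd := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-826110", "config"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("config %v -> %q\n", args, string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode()
	}
	if err != nil {
		return -1 // the binary could not be started at all
	}
	return 0
}

func main() {
	run("unset", "cpus")
	fmt.Println("get on missing key:", run("get", "cpus")) // expect 14, per the test log
	run("set", "cpus", "2")
	fmt.Println("get on set key:   ", run("get", "cpus")) // expect 0
	run("unset", "cpus")
	fmt.Println("get after unset:  ", run("get", "cpus")) // expect 14 again
}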

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-826110 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-826110 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 433324: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.80s)

                                                
                                    
TestFunctional/parallel/DryRun (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-826110 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-826110 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (163.5563ms)

                                                
                                                
-- stdout --
	* [functional-826110] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:03:33.939873  429662 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:03:33.940133  429662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:03:33.940143  429662 out.go:374] Setting ErrFile to fd 2...
	I1207 23:03:33.940147  429662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:03:33.940362  429662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:03:33.940817  429662 out.go:368] Setting JSON to false
	I1207 23:03:33.941816  429662 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6358,"bootTime":1765142256,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:03:33.941879  429662 start.go:143] virtualization: kvm guest
	I1207 23:03:33.944280  429662 out.go:179] * [functional-826110] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:03:33.945786  429662 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:03:33.945803  429662 notify.go:221] Checking for updates...
	I1207 23:03:33.948474  429662 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:03:33.949793  429662 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:03:33.951137  429662 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:03:33.952366  429662 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:03:33.953495  429662 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:03:33.955182  429662 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:03:33.955773  429662 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:03:33.978732  429662 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:03:33.978924  429662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:03:34.033687  429662 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 23:03:34.02260293 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:03:34.033798  429662 docker.go:319] overlay module found
	I1207 23:03:34.035633  429662 out.go:179] * Using the docker driver based on existing profile
	I1207 23:03:34.036715  429662 start.go:309] selected driver: docker
	I1207 23:03:34.036728  429662 start.go:927] validating driver "docker" against &{Name:functional-826110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-826110 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:03:34.036850  429662 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:03:34.038565  429662 out.go:203] 
	W1207 23:03:34.039875  429662 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 23:03:34.040904  429662 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-826110 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)
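The dry-run pair relies on `start --dry-run` validating flags without touching the existing cluster: a 250MB memory request should fail fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the plain dry run should succeed. A hedged sketch of that assertion, using the same flags as the commands above:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// exitCode runs minikube with the given arguments and returns its exit code.
func exitCode(args ...string) int {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			return exitErr.ExitCode()
		}
		log.Fatalf("could not run minikube: %v", err)
	}
	return 0
}

func main() {
	// Too little memory: validation should reject it during --dry-run.
	code := exitCode("start", "-p", "functional-826110", "--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=crio")
	fmt.Println("250MB dry run exit code:", code) // this log shows 23 here

	// Without the bogus memory request the dry run should pass.
	code = exitCode("start", "-p", "functional-826110", "--dry-run",
		"--driver=docker", "--container-runtime=crio")
	fmt.Println("plain dry run exit code:", code) // expected 0
}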

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-826110 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-826110 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (168.023175ms)

                                                
                                                
-- stdout --
	* [functional-826110] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:03:34.329961  429885 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:03:34.330049  429885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:03:34.330053  429885 out.go:374] Setting ErrFile to fd 2...
	I1207 23:03:34.330057  429885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:03:34.330404  429885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:03:34.330843  429885 out.go:368] Setting JSON to false
	I1207 23:03:34.331907  429885 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6358,"bootTime":1765142256,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:03:34.331971  429885 start.go:143] virtualization: kvm guest
	I1207 23:03:34.334914  429885 out.go:179] * [functional-826110] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1207 23:03:34.336200  429885 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:03:34.336213  429885 notify.go:221] Checking for updates...
	I1207 23:03:34.338344  429885 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:03:34.339625  429885 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:03:34.340845  429885 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:03:34.342043  429885 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:03:34.343294  429885 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:03:34.345237  429885 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:03:34.345931  429885 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:03:34.370371  429885 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:03:34.370507  429885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:03:34.423847  429885 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 23:03:34.414651347 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:03:34.423957  429885 docker.go:319] overlay module found
	I1207 23:03:34.425783  429885 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1207 23:03:34.426937  429885 start.go:309] selected driver: docker
	I1207 23:03:34.426955  429885 start.go:927] validating driver "docker" against &{Name:functional-826110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-826110 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:03:34.427081  429885 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:03:34.429143  429885 out.go:203] 
	W1207 23:03:34.430550  429885 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1207 23:03:34.431884  429885 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)
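The three status invocations exercise the default output, a Go-template format string, and JSON. A sketch of the template variant, reusing the exact format string from the command above (including its literal "kublet" label); note that minikube status can exit non-zero when components are not running, which this sketch simply treats as a failure:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same template string as the test command above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-826110",
		"status", "-f",
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").Output()
	if err != nil {
		log.Fatalf("minikube status failed: %v", err)
	}
	// Expected shape: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
	fmt.Println(string(out))
}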

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-826110 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-826110 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-plrhn" [45a41b71-fe8e-43c0-a673-8604d02e9b48] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-plrhn" [45a41b71-fe8e-43c0-a673-8604d02e9b48] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003890538s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30486
functional_test.go:1666: error fetching http://192.168.49.2:30486: Get "http://192.168.49.2:30486": dial tcp 192.168.49.2:30486: connect: connection refused
I1207 23:03:30.721358  393125 retry.go:31] will retry after 1.441228478s: Get "http://192.168.49.2:30486": dial tcp 192.168.49.2:30486: connect: connection refused
functional_test.go:1680: http://192.168.49.2:30486: success! body:
Request served by hello-node-connect-7d85dfc575-plrhn

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30486
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.97s)
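This subtest is a deploy/expose/probe loop: create an echo-server deployment, expose it as a NodePort, resolve the URL with `service --url`, then retry the HTTP GET until the endpoint answers (the first attempt above was refused and retried). A condensed sketch of the probing half, assuming the hello-node-connect deployment and service already exist:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Resolve the NodePort URL the same way the test does.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-826110",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatalf("resolving service URL: %v", err)
	}
	url := strings.TrimSpace(string(out))

	// The endpoint may refuse connections briefly after the pod turns Ready,
	// so retry with a short backoff, as the harness does.
	var body []byte
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ = io.ReadAll(resp.Body)
			resp.Body.Close()
			break
		}
		log.Printf("attempt %d: %v; retrying", attempt, err)
		time.Sleep(2 * time.Second)
	}
	if len(body) == 0 {
		log.Fatal("endpoint never answered")
	}
	fmt.Printf("%s answered:\n%s\n", url, body)
}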

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [7610449c-6c9c-4875-975e-34dde65c0abf] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003513778s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-826110 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-826110 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-826110 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-826110 apply -f testdata/storage-provisioner/pod.yaml
I1207 23:03:25.251406  393125 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6ab12038-ee07-4f90-b30d-96d70a69a27a] Pending
helpers_test.go:352: "sp-pod" [6ab12038-ee07-4f90-b30d-96d70a69a27a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6ab12038-ee07-4f90-b30d-96d70a69a27a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003816845s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-826110 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-826110 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-826110 apply -f testdata/storage-provisioner/pod.yaml
I1207 23:03:36.408979  393125 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [174d9fd0-a345-4f02-a6e2-52802af04e27] Pending
helpers_test.go:352: "sp-pod" [174d9fd0-a345-4f02-a6e2-52802af04e27] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [174d9fd0-a345-4f02-a6e2-52802af04e27] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004501634s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-826110 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.67s)
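The PVC subtest shows that data written through the claim outlives the pod: write a file from the first sp-pod, delete the pod, recreate it from the same manifest, and list the mount again. A shortened sketch of that sequence (manifest paths are the repo's testdata files used above; the readiness waits between steps are elided here):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// kubectl runs a command against the functional-826110 context and returns
// its combined output, aborting the sketch on failure.
func kubectl(args ...string) string {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-826110"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// In the real test a wait for the pod to become Ready happens here.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the claim (and the file) should survive.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	fmt.Println("contents after pod restart:", kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}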

                                                
                                    
TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh -n functional-826110 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 cp functional-826110:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2556277159/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh -n functional-826110 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh -n functional-826110 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.82s)
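Each cp case is a copy followed by an `ssh sudo cat` readback. A minimal sketch of the first pair (host file into the node), using the same paths as the commands above:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	minikube := "out/minikube-linux-amd64"
	profile := "functional-826110"

	// Copy a host file into the node...
	if out, err := exec.Command(minikube, "-p", profile, "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	// ...and read it back over ssh to confirm the contents arrived intact.
	out, err := exec.Command(minikube, "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("readback failed: %v", err)
	}
	fmt.Printf("node copy contains: %q\n", string(out))
}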

                                                
                                    
TestFunctional/parallel/MySQL (16.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-826110 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-f8p47" [188c6875-d5c5-43d4-9bf5-66ffcc3f9a39] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-f8p47" [188c6875-d5c5-43d4-9bf5-66ffcc3f9a39] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.003676323s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-826110 exec mysql-5bb876957f-f8p47 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-826110 exec mysql-5bb876957f-f8p47 -- mysql -ppassword -e "show databases;": exit status 1 (90.796893ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1207 23:03:56.946740  393125 retry.go:31] will retry after 1.312316355s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-826110 exec mysql-5bb876957f-f8p47 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-826110 exec mysql-5bb876957f-f8p47 -- mysql -ppassword -e "show databases;": exit status 1 (97.430729ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1207 23:03:58.357652  393125 retry.go:31] will retry after 992.041233ms: exit status 1
2025/12/07 23:03:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1812: (dbg) Run:  kubectl --context functional-826110 exec mysql-5bb876957f-f8p47 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (16.77s)
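The two non-zero exits above are expected: mysqld starts accepting its socket a little after the container reports Running, so the harness retries. A small retry loop around the same exec (pod name copied from this run; normally it would be looked up via the app=mysql label):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-5bb876957f-f8p47" // from this run; resolve by label in real use
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-826110",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("databases:\n%s", out)
			return
		}
		// Typically "ERROR 2002 ... Can't connect to local MySQL server" while mysqld warms up.
		log.Printf("attempt %d failed: %v; retrying", attempt, err)
		time.Sleep(2 * time.Second)
	}
	log.Fatal("mysql never became reachable")
}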

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/393125/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "sudo cat /etc/test/nested/copy/393125/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/393125.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "sudo cat /etc/ssl/certs/393125.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/393125.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "sudo cat /usr/share/ca-certificates/393125.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3931252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "sudo cat /etc/ssl/certs/3931252.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3931252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "sudo cat /usr/share/ca-certificates/3931252.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.72s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-826110 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-826110 ssh "sudo systemctl is-active docker": exit status 1 (277.307032ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-826110 ssh "sudo systemctl is-active containerd": exit status 1 (277.342317ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
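Here exit status 1 is the point: on a crio cluster, `systemctl is-active` must report docker and containerd as inactive, which it signals with a non-zero exit. A small sketch of the same assertion over ssh:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, rt := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-826110",
			"ssh", "sudo systemctl is-active "+rt).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// A non-zero exit with "inactive" on stdout is the expected result on a crio cluster.
		fmt.Printf("%s: state=%q, err=%v\n", rt, state, err)
		if err == nil && state == "active" {
			fmt.Printf("unexpected: %s should not be active alongside crio\n", rt)
		}
	}
}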

                                                
                                    
TestFunctional/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-826110 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-826110 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-ddznn" [65a793c4-53ff-4433-8a8c-035f898df57d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-ddznn" [65a793c4-53ff-4433-8a8c-035f898df57d] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003623447s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-826110 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-826110 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-826110 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 425807: os: process already finished
helpers_test.go:519: unable to terminate pid 425495: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-826110 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-826110 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-826110 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [0caee263-a763-443f-8b56-5573f3b7990b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [0caee263-a763-443f-8b56-5573f3b7990b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003008551s
I1207 23:03:29.372064  393125 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 service list -o json
functional_test.go:1504: Took "553.922129ms" to run "out/minikube-linux-amd64 -p functional-826110 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31853
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31853
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-826110 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (12.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.101.156 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (12.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "357.862376ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.742547ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "363.963508ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "62.768463ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-826110 /tmp/TestFunctionalparallelMountCmdany-port638314929/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765148610988175674" to /tmp/TestFunctionalparallelMountCmdany-port638314929/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765148610988175674" to /tmp/TestFunctionalparallelMountCmdany-port638314929/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765148610988175674" to /tmp/TestFunctionalparallelMountCmdany-port638314929/001/test-1765148610988175674
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.008066ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 23:03:31.274526  393125 retry.go:31] will retry after 551.144936ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  7 23:03 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  7 23:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  7 23:03 test-1765148610988175674
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh cat /mount-9p/test-1765148610988175674
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-826110 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [b43137c4-8fac-492c-9ea4-7c5dc018b962] Pending
helpers_test.go:352: "busybox-mount" [b43137c4-8fac-492c-9ea4-7c5dc018b962] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [b43137c4-8fac-492c-9ea4-7c5dc018b962] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [b43137c4-8fac-492c-9ea4-7c5dc018b962] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003937553s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-826110 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-826110 /tmp/TestFunctionalparallelMountCmdany-port638314929/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.92s)
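For reference, the 9p mount flow the test drives can be reproduced by hand against a running profile; a minimal sketch (the host directory /tmp/mount-demo is an arbitrary example, not taken from the log):
  $ out/minikube-linux-amd64 mount -p functional-826110 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
  $ out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T /mount-9p | grep 9p"    # confirm a 9p mount is visible in the guest
  $ out/minikube-linux-amd64 -p functional-826110 ssh -- ls -la /mount-9p
  $ out/minikube-linux-amd64 -p functional-826110 ssh "sudo umount -f /mount-9p"          # tear down
The test additionally runs the busybox-mount pod from testdata/busybox-mount-test.yaml to verify the mount is usable from inside the cluster.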

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-826110 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-826110
localhost/kicbase/echo-server:functional-826110
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-826110 image ls --format short --alsologtostderr:
I1207 23:03:43.791173  433527 out.go:360] Setting OutFile to fd 1 ...
I1207 23:03:43.791534  433527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:03:43.791547  433527 out.go:374] Setting ErrFile to fd 2...
I1207 23:03:43.791554  433527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:03:43.791853  433527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
I1207 23:03:43.792700  433527 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1207 23:03:43.792865  433527 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1207 23:03:43.793502  433527 cli_runner.go:164] Run: docker container inspect functional-826110 --format={{.State.Status}}
I1207 23:03:43.813961  433527 ssh_runner.go:195] Run: systemctl --version
I1207 23:03:43.814021  433527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-826110
I1207 23:03:43.833196  433527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/functional-826110/id_rsa Username:docker}
I1207 23:03:43.928684  433527 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-826110 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-826110  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-826110  │ b1b14bf70b193 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-826110 image ls --format table --alsologtostderr:
I1207 23:03:44.276025  433822 out.go:360] Setting OutFile to fd 1 ...
I1207 23:03:44.276144  433822 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:03:44.276155  433822 out.go:374] Setting ErrFile to fd 2...
I1207 23:03:44.276161  433822 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:03:44.276504  433822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
I1207 23:03:44.277209  433822 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1207 23:03:44.277309  433822 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1207 23:03:44.277826  433822 cli_runner.go:164] Run: docker container inspect functional-826110 --format={{.State.Status}}
I1207 23:03:44.297801  433822 ssh_runner.go:195] Run: systemctl --version
I1207 23:03:44.297853  433822 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-826110
I1207 23:03:44.319254  433822 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/functional-826110/id_rsa Username:docker}
I1207 23:03:44.414553  433822 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-826110 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"b1b14bf70b193a5aed856b800b807a8c78add4dc3f7e5d8534ec4e6ad727ef41","repoDigests":["localhost/minikube-local-cache-test@sha256:4c5aef4933d5c01dcfe09fe340fc9ee2a382b5b7716666a40fe8fe9fd32a596e"],"repoTags":["localhost/minikube-local-cache-test:functional-826110"],"size":"
3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb
0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha25
6:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/ki
cbase/echo-server:functional-826110"],"size":"4944818"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30
d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a
9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-826110 image ls --format json --alsologtostderr:
I1207 23:03:44.229824  433799 out.go:360] Setting OutFile to fd 1 ...
I1207 23:03:44.230121  433799 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:03:44.230137  433799 out.go:374] Setting ErrFile to fd 2...
I1207 23:03:44.230145  433799 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:03:44.230490  433799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
I1207 23:03:44.231396  433799 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1207 23:03:44.231547  433799 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1207 23:03:44.232186  433799 cli_runner.go:164] Run: docker container inspect functional-826110 --format={{.State.Status}}
I1207 23:03:44.255242  433799 ssh_runner.go:195] Run: systemctl --version
I1207 23:03:44.255288  433799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-826110
I1207 23:03:44.275468  433799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/functional-826110/id_rsa Username:docker}
I1207 23:03:44.371109  433799 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-826110 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: b1b14bf70b193a5aed856b800b807a8c78add4dc3f7e5d8534ec4e6ad727ef41
repoDigests:
- localhost/minikube-local-cache-test@sha256:4c5aef4933d5c01dcfe09fe340fc9ee2a382b5b7716666a40fe8fe9fd32a596e
repoTags:
- localhost/minikube-local-cache-test:functional-826110
size: "3330"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-826110
size: "4944818"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-826110 image ls --format yaml --alsologtostderr:
I1207 23:03:44.030659  433683 out.go:360] Setting OutFile to fd 1 ...
I1207 23:03:44.030903  433683 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:03:44.030914  433683 out.go:374] Setting ErrFile to fd 2...
I1207 23:03:44.030918  433683 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:03:44.031159  433683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
I1207 23:03:44.031806  433683 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1207 23:03:44.031919  433683 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1207 23:03:44.032426  433683 cli_runner.go:164] Run: docker container inspect functional-826110 --format={{.State.Status}}
I1207 23:03:44.052683  433683 ssh_runner.go:195] Run: systemctl --version
I1207 23:03:44.052757  433683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-826110
I1207 23:03:44.073112  433683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/functional-826110/id_rsa Username:docker}
I1207 23:03:44.172581  433683 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
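The four ImageList subtests above differ only in the --format flag passed to image ls; the same listings can be requested by hand (sketch):
  $ out/minikube-linux-amd64 -p functional-826110 image ls --format short
  $ out/minikube-linux-amd64 -p functional-826110 image ls --format table
  $ out/minikube-linux-amd64 -p functional-826110 image ls --format json
  $ out/minikube-linux-amd64 -p functional-826110 image ls --format yaml
All four read the same data; on crio the stderr above shows the listing ultimately comes from "sudo crictl images --output json" inside the node.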

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-826110 ssh pgrep buildkitd: exit status 1 (280.247396ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image build -t localhost/my-image:functional-826110 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-826110 image build -t localhost/my-image:functional-826110 testdata/build --alsologtostderr: (5.742705099s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-826110 image build -t localhost/my-image:functional-826110 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cde299991ea
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-826110
--> e42628de42a
Successfully tagged localhost/my-image:functional-826110
e42628de42aa2c74aaf30b644b466650c32fa4349f4d906f65ca7b7fd898db09
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-826110 image build -t localhost/my-image:functional-826110 testdata/build --alsologtostderr:
I1207 23:03:44.748118  434034 out.go:360] Setting OutFile to fd 1 ...
I1207 23:03:44.748425  434034 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:03:44.748437  434034 out.go:374] Setting ErrFile to fd 2...
I1207 23:03:44.748443  434034 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:03:44.748668  434034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
I1207 23:03:44.749272  434034 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1207 23:03:44.750040  434034 config.go:182] Loaded profile config "functional-826110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1207 23:03:44.750552  434034 cli_runner.go:164] Run: docker container inspect functional-826110 --format={{.State.Status}}
I1207 23:03:44.769507  434034 ssh_runner.go:195] Run: systemctl --version
I1207 23:03:44.769581  434034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-826110
I1207 23:03:44.788817  434034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/functional-826110/id_rsa Username:docker}
I1207 23:03:44.884736  434034 build_images.go:162] Building image from path: /tmp/build.3040202845.tar
I1207 23:03:44.884829  434034 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1207 23:03:44.896652  434034 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3040202845.tar
I1207 23:03:44.903004  434034 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3040202845.tar: stat -c "%s %y" /var/lib/minikube/build/build.3040202845.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3040202845.tar': No such file or directory
I1207 23:03:44.903039  434034 ssh_runner.go:362] scp /tmp/build.3040202845.tar --> /var/lib/minikube/build/build.3040202845.tar (3072 bytes)
I1207 23:03:44.931057  434034 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3040202845
I1207 23:03:44.943669  434034 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3040202845 -xf /var/lib/minikube/build/build.3040202845.tar
I1207 23:03:44.957232  434034 crio.go:315] Building image: /var/lib/minikube/build/build.3040202845
I1207 23:03:44.957343  434034 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-826110 /var/lib/minikube/build/build.3040202845 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1207 23:03:50.400857  434034 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-826110 /var/lib/minikube/build/build.3040202845 --cgroup-manager=cgroupfs: (5.443475508s)
I1207 23:03:50.400946  434034 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3040202845
I1207 23:03:50.409987  434034 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3040202845.tar
I1207 23:03:50.418170  434034 build_images.go:218] Built localhost/my-image:functional-826110 from /tmp/build.3040202845.tar
I1207 23:03:50.418234  434034 build_images.go:134] succeeded building to: functional-826110
I1207 23:03:50.418240  434034 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.43s)
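As the stderr above shows, on the crio runtime image build stages the build context as a tar under /var/lib/minikube/build and shells out to podman inside the node. A hand-run equivalent, plus the context the STEP lines imply (sketch; content.txt is whatever file the test's testdata/build context carries):
  # testdata/build, per STEP 1/3 .. 3/3 above:
  #   FROM gcr.io/k8s-minikube/busybox
  #   RUN true
  #   ADD content.txt /
  $ out/minikube-linux-amd64 -p functional-826110 image build -t localhost/my-image:functional-826110 testdata/build --alsologtostderr
  $ out/minikube-linux-amd64 -p functional-826110 image ls    # the new localhost/my-image tag should appear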

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.773627637s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-826110
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image load --daemon kicbase/echo-server:functional-826110 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image load --daemon kicbase/echo-server:functional-826110 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-826110 /tmp/TestFunctionalparallelMountCmdspecific-port1920641579/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (297.19722ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 23:03:39.200723  393125 retry.go:31] will retry after 269.873213ms: exit status 1
I1207 23:03:39.433196  393125 retry.go:31] will retry after 2.520743483s: Temporary Error: Get "http://10.106.101.156": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-826110 /tmp/TestFunctionalparallelMountCmdspecific-port1920641579/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-826110 ssh "sudo umount -f /mount-9p": exit status 1 (286.917675ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-826110 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-826110 /tmp/TestFunctionalparallelMountCmdspecific-port1920641579/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-826110
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image load --daemon kicbase/echo-server:functional-826110 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-826110 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2299827472/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-826110 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2299827472/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-826110 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2299827472/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T" /mount1: exit status 1 (377.810059ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 23:03:40.902909  393125 retry.go:31] will retry after 441.156039ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-826110 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-826110 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2299827472/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-826110 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2299827472/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-826110 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2299827472/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)
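VerifyCleanup leans on mount --kill=true, which terminates every mount helper process for the profile in one call; a minimal sketch (the host directory is an arbitrary example):
  $ out/minikube-linux-amd64 mount -p functional-826110 /tmp/demo:/mount1 --alsologtostderr -v=1 &
  $ out/minikube-linux-amd64 mount -p functional-826110 /tmp/demo:/mount2 --alsologtostderr -v=1 &
  $ out/minikube-linux-amd64 -p functional-826110 ssh "findmnt -T" /mount1
  $ out/minikube-linux-amd64 mount -p functional-826110 --kill=true    # kill all mount processes for this profile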

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image save kicbase/echo-server:functional-826110 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image rm kicbase/echo-server:functional-826110 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)
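ImageSaveToFile, ImageRemove and ImageLoadFromFile together round-trip an image through a tarball; by hand (sketch, with an example path under /tmp):
  $ out/minikube-linux-amd64 -p functional-826110 image save kicbase/echo-server:functional-826110 /tmp/echo-server-save.tar
  $ out/minikube-linux-amd64 -p functional-826110 image rm kicbase/echo-server:functional-826110
  $ out/minikube-linux-amd64 -p functional-826110 image ls                                 # tag gone
  $ out/minikube-linux-amd64 -p functional-826110 image load /tmp/echo-server-save.tar
  $ out/minikube-linux-amd64 -p functional-826110 image ls                                 # tag restored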

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-826110 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-826110
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-826110 image save --daemon kicbase/echo-server:functional-826110 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-826110
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-826110
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-826110
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-826110
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22054-389542/.minikube/files/etc/test/nested/copy/393125/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (38.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-458242 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1207 23:04:36.133443  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-458242 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (38.546304183s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (38.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1207 23:04:41.544442  393125 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-458242 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-458242 --alsologtostderr -v=8: (6.251165923s)
functional_test.go:678: soft start took 6.251555574s for "functional-458242" cluster.
I1207 23:04:47.795958  393125 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.25s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-458242 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.79s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.79s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-458242 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach4199535824/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 cache add minikube-local-cache-test:functional-458242
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-458242 cache add minikube-local-cache-test:functional-458242: (1.61298107s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 cache delete minikube-local-cache-test:functional-458242
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-458242
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.92s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-458242 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.854665ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.61s)
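Note: the cache_reload steps above double as a recipe for repopulating an image that was removed on the node; a hand-run sketch of the same cycle, using the image and profile taken from this log:

    # remove the cached image from the node's container runtime
    out/minikube-linux-amd64 -p functional-458242 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # crictl inspecti now fails, confirming the image is gone
    out/minikube-linux-amd64 -p functional-458242 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # push everything in minikube's local cache back onto the node
    out/minikube-linux-amd64 -p functional-458242 cache reload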

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 kubectl -- --context functional-458242 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-458242 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (35.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-458242 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-458242 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.931729872s)
functional_test.go:776: restart took 35.931867757s for "functional-458242" cluster.
I1207 23:05:30.977976  393125 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (35.93s)
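Note: the --extra-config value follows a component.key=value pattern; the restart above is equivalent to running, by hand:

    # override the apiserver's admission-plugin list on an existing profile
    out/minikube-linux-amd64 start -p functional-458242 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all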

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-458242 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-458242 logs: (1.267961427s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1303207874/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-458242 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1303207874/001/logs.txt: (1.283706733s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-458242 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-458242
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-458242: exit status 115 (349.9816ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31603 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-458242 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-458242 delete -f testdata/invalidsvc.yaml: (1.219713167s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.75s)
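Note: this subtest is a compact recipe for minikube's SVC_UNREACHABLE handling: a NodePort service whose backing pod never runs makes `minikube service` print the URL table but exit 115. A sketch of the same check using the test's own manifest:

    kubectl --context functional-458242 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-458242    # expected: exit status 115
    kubectl --context functional-458242 delete -f testdata/invalidsvc.yaml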

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-458242 config get cpus: exit status 14 (89.743314ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-458242 config get cpus: exit status 14 (83.071981ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)
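Note: the sequence above is the full round trip of a profile-scoped config value; `config get` on an unset key exits 14 with "specified key could not be found in config". By hand (the printed value is an assumption based on what was just set):

    out/minikube-linux-amd64 -p functional-458242 config set cpus 2
    out/minikube-linux-amd64 -p functional-458242 config get cpus     # should print 2
    out/minikube-linux-amd64 -p functional-458242 config unset cpus
    out/minikube-linux-amd64 -p functional-458242 config get cpus     # exit status 14: key not found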

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (9.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-458242 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-458242 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 446591: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (9.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-458242 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-458242 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (197.192506ms)

                                                
                                                
-- stdout --
	* [functional-458242] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:05:40.536165  445808 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:05:40.536292  445808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:05:40.536301  445808 out.go:374] Setting ErrFile to fd 2...
	I1207 23:05:40.536309  445808 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:05:40.536656  445808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:05:40.537279  445808 out.go:368] Setting JSON to false
	I1207 23:05:40.538307  445808 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6485,"bootTime":1765142256,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:05:40.538407  445808 start.go:143] virtualization: kvm guest
	I1207 23:05:40.540829  445808 out.go:179] * [functional-458242] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:05:40.542149  445808 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:05:40.542179  445808 notify.go:221] Checking for updates...
	I1207 23:05:40.544461  445808 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:05:40.545830  445808 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:05:40.546877  445808 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:05:40.547976  445808 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:05:40.549848  445808 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:05:40.551707  445808 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:05:40.552561  445808 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:05:40.581089  445808 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:05:40.581203  445808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:05:40.650974  445808 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-12-07 23:05:40.63985795 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:05:40.651132  445808 docker.go:319] overlay module found
	I1207 23:05:40.652930  445808 out.go:179] * Using the docker driver based on existing profile
	I1207 23:05:40.654230  445808 start.go:309] selected driver: docker
	I1207 23:05:40.654254  445808 start.go:927] validating driver "docker" against &{Name:functional-458242 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-458242 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:05:40.654456  445808 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:05:40.657587  445808 out.go:203] 
	W1207 23:05:40.659199  445808 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 23:05:40.660555  445808 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-458242 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)
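Note: the dry-run failure shows minikube validating resources before creating anything: 250MB is below the 1800MB usable minimum, so the command exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while the second dry run without the undersized memory flag succeeds. Reproduced by hand:

    # rejected up front: exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-linux-amd64 start -p functional-458242 --dry-run --memory 250MB --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
    # accepted: same dry run with the profile's existing memory setting
    out/minikube-linux-amd64 start -p functional-458242 --dry-run --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0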

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-458242 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-458242 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (191.936649ms)

                                                
                                                
-- stdout --
	* [functional-458242] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:05:40.351292  445618 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:05:40.351592  445618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:05:40.351603  445618 out.go:374] Setting ErrFile to fd 2...
	I1207 23:05:40.351608  445618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:05:40.351942  445618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:05:40.352428  445618 out.go:368] Setting JSON to false
	I1207 23:05:40.353588  445618 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6484,"bootTime":1765142256,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:05:40.353658  445618 start.go:143] virtualization: kvm guest
	I1207 23:05:40.355480  445618 out.go:179] * [functional-458242] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1207 23:05:40.356858  445618 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:05:40.356881  445618 notify.go:221] Checking for updates...
	I1207 23:05:40.360317  445618 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:05:40.361640  445618 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:05:40.362789  445618 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:05:40.363964  445618 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:05:40.365251  445618 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:05:40.367115  445618 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:05:40.367904  445618 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:05:40.393206  445618 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:05:40.393315  445618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:05:40.454835  445618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-07 23:05:40.44358371 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:05:40.454971  445618 docker.go:319] overlay module found
	I1207 23:05:40.458245  445618 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1207 23:05:40.459854  445618 start.go:309] selected driver: docker
	I1207 23:05:40.459874  445618 start.go:927] validating driver "docker" against &{Name:functional-458242 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-458242 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1207 23:05:40.459981  445618 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:05:40.462117  445618 out.go:203] 
	W1207 23:05:40.463188  445618 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1207 23:05:40.464146  445618 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (1.10s)
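Note: the three invocations above cover the plain, Go-template, and JSON output modes of `minikube status`; the template string below is copied verbatim from the test (including its "kublet" label) and is handy for scripting:

    out/minikube-linux-amd64 -p functional-458242 status
    out/minikube-linux-amd64 -p functional-458242 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
    out/minikube-linux-amd64 -p functional-458242 status -o json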

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (7.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-458242 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-458242 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-9qntt" [0f792fa1-965b-4669-be15-367b633917d1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-9qntt" [0f792fa1-965b-4669-be15-367b633917d1] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003823788s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31300
functional_test.go:1680: http://192.168.49.2:31300: success! body:
Request served by hello-node-connect-9f67c86d4-9qntt

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31300
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (7.80s)
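Note: ServiceCmdConnect is the standard expose-and-reach flow: deploy an echo server, expose it as a NodePort, resolve its URL through minikube, then fetch it. A sketch of the same flow; curl stands in for the test's Go HTTP client, and the NodePort (31300 here) is assigned per run:

    kubectl --context functional-458242 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-458242 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-458242 service hello-node-connect --url
    curl http://192.168.49.2:31300    # use the URL printed by the previous command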

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (28.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [865495a7-f568-4f2d-bfaf-6b63ece522ed] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003533612s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-458242 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-458242 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-458242 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-458242 apply -f testdata/storage-provisioner/pod.yaml
I1207 23:05:54.975795  393125 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [73619b7a-e733-452d-bd85-03aa2d9104a2] Pending
helpers_test.go:352: "sp-pod" [73619b7a-e733-452d-bd85-03aa2d9104a2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [73619b7a-e733-452d-bd85-03aa2d9104a2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003636745s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-458242 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-458242 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-458242 apply -f testdata/storage-provisioner/pod.yaml
I1207 23:06:10.575222  393125 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [dd9ee38e-1c74-4c87-8ba5-89852505c03e] Pending
helpers_test.go:352: "sp-pod" [dd9ee38e-1c74-4c87-8ba5-89852505c03e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [dd9ee38e-1c74-4c87-8ba5-89852505c03e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00308814s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-458242 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (28.06s)
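Note: the PVC subtest proves data outlives a pod: write a file through the first sp-pod, delete the pod, recreate it against the same claim, and the file is still there. The same flow by hand, using the test's own manifests:

    kubectl --context functional-458242 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-458242 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-458242 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-458242 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-458242 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-458242 exec sp-pod -- ls /tmp/mount    # foo should still be listed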

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "echo hello"
2025/12/07 23:05:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.62s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh -n functional-458242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 cp functional-458242:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp4055085536/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh -n functional-458242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh -n functional-458242 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.93s)
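Note: CpCmd covers both copy directions of `minikube cp`, plus copying into a target directory that does not yet exist on the node; the same three copies, with the local destination path simplified for readability:

    out/minikube-linux-amd64 -p functional-458242 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-458242 cp functional-458242:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-458242 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    out/minikube-linux-amd64 -p functional-458242 ssh -n functional-458242 "sudo cat /tmp/does/not/exist/cp-test.txt"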

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (19.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-458242 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-crf9g" [f43a5d42-ae8c-4e74-b71d-beb358262723] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-crf9g" [f43a5d42-ae8c-4e74-b71d-beb358262723] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 17.003389565s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-458242 exec mysql-844cf969f6-crf9g -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-458242 exec mysql-844cf969f6-crf9g -- mysql -ppassword -e "show databases;": exit status 1 (91.441591ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1207 23:06:07.932015  393125 retry.go:31] will retry after 534.29294ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-458242 exec mysql-844cf969f6-crf9g -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-458242 exec mysql-844cf969f6-crf9g -- mysql -ppassword -e "show databases;": exit status 1 (87.716835ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1207 23:06:08.554594  393125 retry.go:31] will retry after 1.514403433s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-458242 exec mysql-844cf969f6-crf9g -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (19.51s)
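Note: the two ERROR 2002 failures above are expected startup lag; the pod is Running but mysqld has not yet created its socket, so the harness retries with backoff until the query succeeds. Queried by hand the same way (the pod name is specific to this run):

    kubectl --context functional-458242 exec mysql-844cf969f6-crf9g -- mysql -ppassword -e "show databases;"
    # ERROR 2002 just means the socket is not ready yet; wait a moment and rerun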

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/393125/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "sudo cat /etc/test/nested/copy/393125/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.34s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.98s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/393125.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "sudo cat /etc/ssl/certs/393125.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/393125.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "sudo cat /usr/share/ca-certificates/393125.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3931252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "sudo cat /etc/ssl/certs/3931252.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3931252.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "sudo cat /usr/share/ca-certificates/3931252.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.98s)
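Note: CertSync checks that the test's certificate (393125.pem in this run) is synced into the node under both its .pem name and its hash name; spot-checked over ssh with:

    out/minikube-linux-amd64 -p functional-458242 ssh "sudo cat /etc/ssl/certs/393125.pem"
    out/minikube-linux-amd64 -p functional-458242 ssh "sudo cat /usr/share/ca-certificates/393125.pem"
    out/minikube-linux-amd64 -p functional-458242 ssh "sudo cat /etc/ssl/certs/51391683.0"    # same cert under its hash name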

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-458242 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-458242 ssh "sudo systemctl is-active docker": exit status 1 (283.607447ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-458242 ssh "sudo systemctl is-active containerd": exit status 1 (284.906793ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)
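The assertion above keys off the exit status of `systemctl is-active` (3 means inactive), not the printed text. A minimal manual reproduction on this crio profile might look like the sketch below; the final crio check is my addition and is not exercised by the test:
    out/minikube-linux-amd64 -p functional-458242 ssh "sudo systemctl is-active docker"      # "inactive", remote exit status 3
    out/minikube-linux-amd64 -p functional-458242 ssh "sudo systemctl is-active containerd"  # "inactive", remote exit status 3
    out/minikube-linux-amd64 -p functional-458242 ssh "sudo systemctl is-active crio"        # expected "active" (assumed, not asserted above)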

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.18s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (7.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-458242 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-458242 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-86fwc" [7f5c33ff-84a1-4a08-ba24-3a0def7ef9a6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-86fwc" [7f5c33ff-84a1-4a08-ba24-3a0def7ef9a6] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003761782s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (7.19s)
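Reproducing this deployment by hand only needs the two kubectl calls shown above plus a readiness wait; a minimal sketch (the explicit `kubectl wait` is my substitute for the test's programmatic pod polling):
    kubectl --context functional-458242 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-458242 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-458242 wait --for=condition=available deployment/hello-node --timeout=600s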

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.82s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-458242 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo18280570/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765148738830788745" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo18280570/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765148738830788745" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo18280570/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765148738830788745" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo18280570/001/test-1765148738830788745
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-458242 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.517484ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 23:05:39.152662  393125 retry.go:31] will retry after 378.655154ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  7 23:05 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  7 23:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  7 23:05 test-1765148738830788745
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh cat /mount-9p/test-1765148738830788745
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-458242 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [a0cf7c9c-e1ff-41a9-83e2-9eb4991aef0f] Pending
helpers_test.go:352: "busybox-mount" [a0cf7c9c-e1ff-41a9-83e2-9eb4991aef0f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [a0cf7c9c-e1ff-41a9-83e2-9eb4991aef0f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [a0cf7c9c-e1ff-41a9-83e2-9eb4991aef0f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004275062s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-458242 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-458242 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo18280570/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.82s)
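Stripped of the test plumbing, the 9p mount flow exercised here is: start the mount in the background, verify it from inside the node, then tear it down. A hedged sketch using a hypothetical host directory /tmp/hostdir:
    out/minikube-linux-amd64 mount -p functional-458242 /tmp/hostdir:/mount-9p &        # hypothetical host path; runs until stopped
    out/minikube-linux-amd64 -p functional-458242 ssh "findmnt -T /mount-9p | grep 9p"  # confirm the 9p mount is visible in the node
    out/minikube-linux-amd64 -p functional-458242 ssh "sudo umount -f /mount-9p"        # clean up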

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "407.765434ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.245471ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "370.999145ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "78.444508ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.45s)
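The ProfileCmd subtests above exercise the same listing in its table, light, JSON, and light-JSON forms; the recorded timings show the --light/-l variants completing in a fraction of the time of the full listings. Run by hand, the four variants are:
    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 profile list -l
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light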

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.57s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (1.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-458242 image ls --format short --alsologtostderr: (1.922261896s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-458242 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-458242
localhost/kicbase/echo-server:functional-458242
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-458242 image ls --format short --alsologtostderr:
I1207 23:05:58.943583  451961 out.go:360] Setting OutFile to fd 1 ...
I1207 23:05:58.943703  451961 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:05:58.943710  451961 out.go:374] Setting ErrFile to fd 2...
I1207 23:05:58.943717  451961 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:05:58.944011  451961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
I1207 23:05:58.944775  451961 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1207 23:05:58.944930  451961 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1207 23:05:58.945583  451961 cli_runner.go:164] Run: docker container inspect functional-458242 --format={{.State.Status}}
I1207 23:05:58.969490  451961 ssh_runner.go:195] Run: systemctl --version
I1207 23:05:58.969584  451961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-458242
I1207 23:05:58.992513  451961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/functional-458242/id_rsa Username:docker}
I1207 23:05:59.097381  451961 ssh_runner.go:195] Run: sudo crictl images --output json
I1207 23:06:00.782123  451961 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.684707788s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (1.92s)
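The same listing is exercised in three more formats below (table, json, yaml); as the Stderr traces show, each run ultimately shells into the node and calls `sudo crictl images --output json`. Run by hand:
    out/minikube-linux-amd64 -p functional-458242 image ls --format short
    out/minikube-linux-amd64 -p functional-458242 image ls --format table
    out/minikube-linux-amd64 -p functional-458242 image ls --format json
    out/minikube-linux-amd64 -p functional-458242 image ls --format yaml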

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-458242 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ localhost/minikube-local-cache-test     │ functional-458242  │ b1b14bf70b193 │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-458242  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-458242 image ls --format table --alsologtostderr:
I1207 23:06:01.109926  452293 out.go:360] Setting OutFile to fd 1 ...
I1207 23:06:01.110219  452293 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:06:01.110230  452293 out.go:374] Setting ErrFile to fd 2...
I1207 23:06:01.110234  452293 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:06:01.110463  452293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
I1207 23:06:01.111387  452293 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1207 23:06:01.111557  452293 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1207 23:06:01.112966  452293 cli_runner.go:164] Run: docker container inspect functional-458242 --format={{.State.Status}}
I1207 23:06:01.133640  452293 ssh_runner.go:195] Run: systemctl --version
I1207 23:06:01.133698  452293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-458242
I1207 23:06:01.154748  452293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/functional-458242/id_rsa Username:docker}
I1207 23:06:01.250219  452293 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-458242 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8
dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k
8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f91222
91d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTag
s":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"b1b14bf70b193a5aed856b800b807a8c78add4dc3f7e5d8534ec4e6ad727ef41","repoDigests":["localhost/minikube-local-cache-test@sha256:4c5aef4933d5c01dcfe09fe340fc9ee2a382b5b7716666a40fe8fe9fd32a596e"],"repoTags":["localhost/minikube-local-cache-test:functional-458242"],"size":"3330"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"
],"size":"63585106"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab7
7afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-458242"],"size":"4945146"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-458242 image ls --format json --alsologtostderr:
I1207 23:06:00.858870  452113 out.go:360] Setting OutFile to fd 1 ...
I1207 23:06:00.859177  452113 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:06:00.859189  452113 out.go:374] Setting ErrFile to fd 2...
I1207 23:06:00.859196  452113 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:06:00.859502  452113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
I1207 23:06:00.860377  452113 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1207 23:06:00.860553  452113 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1207 23:06:00.861208  452113 cli_runner.go:164] Run: docker container inspect functional-458242 --format={{.State.Status}}
I1207 23:06:00.883979  452113 ssh_runner.go:195] Run: systemctl --version
I1207 23:06:00.884030  452113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-458242
I1207 23:06:00.908104  452113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/functional-458242/id_rsa Username:docker}
I1207 23:06:01.005812  452113 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-458242 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-458242
size: "4945146"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: b1b14bf70b193a5aed856b800b807a8c78add4dc3f7e5d8534ec4e6ad727ef41
repoDigests:
- localhost/minikube-local-cache-test@sha256:4c5aef4933d5c01dcfe09fe340fc9ee2a382b5b7716666a40fe8fe9fd32a596e
repoTags:
- localhost/minikube-local-cache-test:functional-458242
size: "3330"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-458242 image ls --format yaml --alsologtostderr:
I1207 23:06:00.014755  452025 out.go:360] Setting OutFile to fd 1 ...
I1207 23:06:00.014873  452025 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:06:00.014884  452025 out.go:374] Setting ErrFile to fd 2...
I1207 23:06:00.014890  452025 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:06:00.015191  452025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
I1207 23:06:00.016047  452025 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1207 23:06:00.016229  452025 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1207 23:06:00.016910  452025 cli_runner.go:164] Run: docker container inspect functional-458242 --format={{.State.Status}}
I1207 23:06:00.040583  452025 ssh_runner.go:195] Run: systemctl --version
I1207 23:06:00.040648  452025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-458242
I1207 23:06:00.062942  452025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/functional-458242/id_rsa Username:docker}
I1207 23:06:00.166203  452025 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.86s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-458242 ssh pgrep buildkitd: exit status 1 (300.470707ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image build -t localhost/my-image:functional-458242 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-458242 image build -t localhost/my-image:functional-458242 testdata/build --alsologtostderr: (4.270668999s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-458242 image build -t localhost/my-image:functional-458242 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8e876793ede
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-458242
--> 1e12fc4e937
Successfully tagged localhost/my-image:functional-458242
1e12fc4e93715a003f5a070a9b717cb289735026971933c5351ad6c58167a36e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-458242 image build -t localhost/my-image:functional-458242 testdata/build --alsologtostderr:
I1207 23:06:01.166827  452305 out.go:360] Setting OutFile to fd 1 ...
I1207 23:06:01.167146  452305 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:06:01.167157  452305 out.go:374] Setting ErrFile to fd 2...
I1207 23:06:01.167162  452305 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1207 23:06:01.167383  452305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
I1207 23:06:01.168022  452305 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1207 23:06:01.168810  452305 config.go:182] Loaded profile config "functional-458242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1207 23:06:01.169472  452305 cli_runner.go:164] Run: docker container inspect functional-458242 --format={{.State.Status}}
I1207 23:06:01.189814  452305 ssh_runner.go:195] Run: systemctl --version
I1207 23:06:01.189862  452305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-458242
I1207 23:06:01.210046  452305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/functional-458242/id_rsa Username:docker}
I1207 23:06:01.304427  452305 build_images.go:162] Building image from path: /tmp/build.942454448.tar
I1207 23:06:01.304495  452305 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1207 23:06:01.313228  452305 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.942454448.tar
I1207 23:06:01.317096  452305 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.942454448.tar: stat -c "%s %y" /var/lib/minikube/build/build.942454448.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.942454448.tar': No such file or directory
I1207 23:06:01.317131  452305 ssh_runner.go:362] scp /tmp/build.942454448.tar --> /var/lib/minikube/build/build.942454448.tar (3072 bytes)
I1207 23:06:01.337083  452305 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.942454448
I1207 23:06:01.346714  452305 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.942454448 -xf /var/lib/minikube/build/build.942454448.tar
I1207 23:06:01.355457  452305 crio.go:315] Building image: /var/lib/minikube/build/build.942454448
I1207 23:06:01.355526  452305 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-458242 /var/lib/minikube/build/build.942454448 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1207 23:06:05.336611  452305 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-458242 /var/lib/minikube/build/build.942454448 --cgroup-manager=cgroupfs: (3.981049808s)
I1207 23:06:05.336671  452305 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.942454448
I1207 23:06:05.345072  452305 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.942454448.tar
I1207 23:06:05.353143  452305 build_images.go:218] Built localhost/my-image:functional-458242 from /tmp/build.942454448.tar
I1207 23:06:05.353177  452305 build_images.go:134] succeeded building to: functional-458242
I1207 23:06:05.353182  452305 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.81s)
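Judging from the three build steps printed above, the testdata/build context presumably contains a content.txt plus a Dockerfile along these lines (a reconstruction from the log, not the checked-in file):
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
On this crio profile, `minikube image build` drives the build through `sudo podman build ... --cgroup-manager=cgroupfs` on the node, as the Stderr trace shows.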

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-458242
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.81s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image load --daemon kicbase/echo-server:functional-458242 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image load --daemon kicbase/echo-server:functional-458242 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-458242
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image load --daemon kicbase/echo-server:functional-458242 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)
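The three daemon-load tests above all follow the same pattern: tag an image locally with the profile name, push it into the cluster runtime with `image load --daemon`, then confirm it with `image ls`. Condensed from the commands actually run:
    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-458242
    out/minikube-linux-amd64 -p functional-458242 image load --daemon kicbase/echo-server:functional-458242
    out/minikube-linux-amd64 -p functional-458242 image ls   # localhost/kicbase/echo-server:functional-458242 should appear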

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.55s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-458242 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo382739644/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-458242 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (317.367097ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1207 23:05:45.963660  393125 retry.go:31] will retry after 274.28612ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-458242 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo382739644/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-458242 ssh "sudo umount -f /mount-9p": exit status 1 (352.635959ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-458242 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-458242 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo382739644/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.85s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 service list -o json
functional_test.go:1504: Took "589.209063ms" to run "out/minikube-linux-amd64 -p functional-458242 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image save kicbase/echo-server:functional-458242 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.45s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image rm kicbase/echo-server:functional-458242 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30776
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-458242 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.654421092s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.92s)
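Together with ImageSaveToFile and ImageRemove above, this gives a full save/remove/restore round trip. Condensed from the commands actually run:
    out/minikube-linux-amd64 -p functional-458242 image save kicbase/echo-server:functional-458242 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-458242 image rm kicbase/echo-server:functional-458242
    out/minikube-linux-amd64 -p functional-458242 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar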

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-458242 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1049545043/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-458242 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1049545043/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-458242 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1049545043/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-458242 ssh "findmnt -T" /mount1: exit status 1 (445.505158ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1207 23:05:47.943158  393125 retry.go:31] will retry after 731.546803ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-458242 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-458242 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1049545043/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-458242 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1049545043/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-458242 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1049545043/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.55s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30776
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-458242
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 image save --daemon kicbase/echo-server:functional-458242 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-458242
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.17s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.42s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-458242 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-458242 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-458242 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-458242 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-458242 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 450791: os: process already finished
helpers_test.go:519: unable to terminate pid 450412: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-458242 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-458242 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [9ad2a054-e0c5-4d99-8b20-51feab89062f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [9ad2a054-e0c5-4d99-8b20-51feab89062f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.031861794s
I1207 23:05:59.752689  393125 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-458242 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.108.24 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-458242 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-458242
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-458242
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-458242
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (111.1s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1207 23:06:52.262461  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:07:19.975579  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m50.368482582s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (111.10s)

TestMultiControlPlane/serial/DeployApp (6.48s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 kubectl -- rollout status deployment/busybox: (4.378919543s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-dslrx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-sd5gw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-wts8f -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-dslrx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-sd5gw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-wts8f -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-dslrx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-sd5gw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-wts8f -- nslookup kubernetes.default.svc.cluster.local
E1207 23:08:18.418039  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:08:18.424483  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:08:18.435892  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:08:18.457319  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:08:18.498804  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DeployApp (6.48s)

TestMultiControlPlane/serial/PingHostFromPods (1.11s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
E1207 23:08:18.580804  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-dslrx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1207 23:08:18.742651  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-dslrx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-sd5gw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1207 23:08:19.064449  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-sd5gw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-wts8f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 kubectl -- exec busybox-7b57f96db7-wts8f -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.11s)

TestMultiControlPlane/serial/AddWorkerNode (23.68s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 node add --alsologtostderr -v 5
E1207 23:08:19.705931  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:08:20.987675  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:08:23.549307  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:08:28.671109  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:08:38.913447  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 node add --alsologtostderr -v 5: (22.778039996s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.68s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-907658 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (17.69s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp testdata/cp-test.txt ha-907658:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786965912/001/cp-test_ha-907658.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658:/home/docker/cp-test.txt ha-907658-m02:/home/docker/cp-test_ha-907658_ha-907658-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m02 "sudo cat /home/docker/cp-test_ha-907658_ha-907658-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658:/home/docker/cp-test.txt ha-907658-m03:/home/docker/cp-test_ha-907658_ha-907658-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m03 "sudo cat /home/docker/cp-test_ha-907658_ha-907658-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658:/home/docker/cp-test.txt ha-907658-m04:/home/docker/cp-test_ha-907658_ha-907658-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m04 "sudo cat /home/docker/cp-test_ha-907658_ha-907658-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp testdata/cp-test.txt ha-907658-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786965912/001/cp-test_ha-907658-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658-m02:/home/docker/cp-test.txt ha-907658:/home/docker/cp-test_ha-907658-m02_ha-907658.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "sudo cat /home/docker/cp-test_ha-907658-m02_ha-907658.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658-m02:/home/docker/cp-test.txt ha-907658-m03:/home/docker/cp-test_ha-907658-m02_ha-907658-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m03 "sudo cat /home/docker/cp-test_ha-907658-m02_ha-907658-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658-m02:/home/docker/cp-test.txt ha-907658-m04:/home/docker/cp-test_ha-907658-m02_ha-907658-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m04 "sudo cat /home/docker/cp-test_ha-907658-m02_ha-907658-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp testdata/cp-test.txt ha-907658-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786965912/001/cp-test_ha-907658-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658-m03:/home/docker/cp-test.txt ha-907658:/home/docker/cp-test_ha-907658-m03_ha-907658.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "sudo cat /home/docker/cp-test_ha-907658-m03_ha-907658.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658-m03:/home/docker/cp-test.txt ha-907658-m02:/home/docker/cp-test_ha-907658-m03_ha-907658-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m02 "sudo cat /home/docker/cp-test_ha-907658-m03_ha-907658-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658-m03:/home/docker/cp-test.txt ha-907658-m04:/home/docker/cp-test_ha-907658-m03_ha-907658-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m04 "sudo cat /home/docker/cp-test_ha-907658-m03_ha-907658-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp testdata/cp-test.txt ha-907658-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786965912/001/cp-test_ha-907658-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt ha-907658:/home/docker/cp-test_ha-907658-m04_ha-907658.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m04 "sudo cat /home/docker/cp-test.txt"
E1207 23:08:59.394770  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658 "sudo cat /home/docker/cp-test_ha-907658-m04_ha-907658.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt ha-907658-m02:/home/docker/cp-test_ha-907658-m04_ha-907658-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m02 "sudo cat /home/docker/cp-test_ha-907658-m04_ha-907658-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 cp ha-907658-m04:/home/docker/cp-test.txt ha-907658-m03:/home/docker/cp-test_ha-907658-m04_ha-907658-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 ssh -n ha-907658-m03 "sudo cat /home/docker/cp-test_ha-907658-m04_ha-907658-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.69s)

TestMultiControlPlane/serial/StopSecondaryNode (17.08s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 node stop m02 --alsologtostderr -v 5: (16.376771383s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-907658 status --alsologtostderr -v 5: exit status 7 (702.55663ms)

-- stdout --
	ha-907658
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-907658-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-907658-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-907658-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1207 23:09:18.423349  472982 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:09:18.423470  472982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:09:18.423482  472982 out.go:374] Setting ErrFile to fd 2...
	I1207 23:09:18.423487  472982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:09:18.423737  472982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:09:18.423941  472982 out.go:368] Setting JSON to false
	I1207 23:09:18.423972  472982 mustload.go:66] Loading cluster: ha-907658
	I1207 23:09:18.424035  472982 notify.go:221] Checking for updates...
	I1207 23:09:18.424422  472982 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:09:18.424440  472982 status.go:174] checking status of ha-907658 ...
	I1207 23:09:18.425010  472982 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:09:18.445275  472982 status.go:371] ha-907658 host status = "Running" (err=<nil>)
	I1207 23:09:18.445305  472982 host.go:66] Checking if "ha-907658" exists ...
	I1207 23:09:18.445626  472982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658
	I1207 23:09:18.464433  472982 host.go:66] Checking if "ha-907658" exists ...
	I1207 23:09:18.464814  472982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:09:18.464882  472982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658
	I1207 23:09:18.483180  472982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658/id_rsa Username:docker}
	I1207 23:09:18.576977  472982 ssh_runner.go:195] Run: systemctl --version
	I1207 23:09:18.583759  472982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:09:18.596498  472982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:09:18.654411  472982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-07 23:09:18.644394892 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:09:18.654985  472982 kubeconfig.go:125] found "ha-907658" server: "https://192.168.49.254:8443"
	I1207 23:09:18.655015  472982 api_server.go:166] Checking apiserver status ...
	I1207 23:09:18.655056  472982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:09:18.666771  472982 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1242/cgroup
	W1207 23:09:18.676432  472982 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1242/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:09:18.676486  472982 ssh_runner.go:195] Run: ls
	I1207 23:09:18.680813  472982 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1207 23:09:18.685858  472982 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1207 23:09:18.685889  472982 status.go:463] ha-907658 apiserver status = Running (err=<nil>)
	I1207 23:09:18.685900  472982 status.go:176] ha-907658 status: &{Name:ha-907658 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:09:18.685921  472982 status.go:174] checking status of ha-907658-m02 ...
	I1207 23:09:18.686246  472982 cli_runner.go:164] Run: docker container inspect ha-907658-m02 --format={{.State.Status}}
	I1207 23:09:18.706046  472982 status.go:371] ha-907658-m02 host status = "Stopped" (err=<nil>)
	I1207 23:09:18.706076  472982 status.go:384] host is not running, skipping remaining checks
	I1207 23:09:18.706084  472982 status.go:176] ha-907658-m02 status: &{Name:ha-907658-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:09:18.706109  472982 status.go:174] checking status of ha-907658-m03 ...
	I1207 23:09:18.706393  472982 cli_runner.go:164] Run: docker container inspect ha-907658-m03 --format={{.State.Status}}
	I1207 23:09:18.725016  472982 status.go:371] ha-907658-m03 host status = "Running" (err=<nil>)
	I1207 23:09:18.725040  472982 host.go:66] Checking if "ha-907658-m03" exists ...
	I1207 23:09:18.725350  472982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m03
	I1207 23:09:18.746164  472982 host.go:66] Checking if "ha-907658-m03" exists ...
	I1207 23:09:18.746529  472982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:09:18.746595  472982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m03
	I1207 23:09:18.765248  472982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m03/id_rsa Username:docker}
	I1207 23:09:18.857050  472982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:09:18.869907  472982 kubeconfig.go:125] found "ha-907658" server: "https://192.168.49.254:8443"
	I1207 23:09:18.869936  472982 api_server.go:166] Checking apiserver status ...
	I1207 23:09:18.869966  472982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:09:18.880979  472982 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup
	W1207 23:09:18.889394  472982 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1177/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:09:18.889469  472982 ssh_runner.go:195] Run: ls
	I1207 23:09:18.893516  472982 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1207 23:09:18.897710  472982 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1207 23:09:18.897747  472982 status.go:463] ha-907658-m03 apiserver status = Running (err=<nil>)
	I1207 23:09:18.897758  472982 status.go:176] ha-907658-m03 status: &{Name:ha-907658-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:09:18.897777  472982 status.go:174] checking status of ha-907658-m04 ...
	I1207 23:09:18.898095  472982 cli_runner.go:164] Run: docker container inspect ha-907658-m04 --format={{.State.Status}}
	I1207 23:09:18.916073  472982 status.go:371] ha-907658-m04 host status = "Running" (err=<nil>)
	I1207 23:09:18.916096  472982 host.go:66] Checking if "ha-907658-m04" exists ...
	I1207 23:09:18.916361  472982 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-907658-m04
	I1207 23:09:18.934340  472982 host.go:66] Checking if "ha-907658-m04" exists ...
	I1207 23:09:18.934609  472982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:09:18.934649  472982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-907658-m04
	I1207 23:09:18.953881  472982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/ha-907658-m04/id_rsa Username:docker}
	I1207 23:09:19.045991  472982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:09:19.059267  472982 status.go:176] ha-907658-m04 status: &{Name:ha-907658-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (17.08s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.84s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 node start m02 --alsologtostderr -v 5: (13.879527851s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.84s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.75s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 stop --alsologtostderr -v 5
E1207 23:09:40.356263  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 stop --alsologtostderr -v 5: (41.01908013s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 start --wait true --alsologtostderr -v 5
E1207 23:10:38.533912  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:38.540542  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:38.551971  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:38.573438  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:38.614865  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:38.696287  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:38.857885  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:39.179560  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:39.821621  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:41.103512  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:43.665090  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:48.786609  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:10:59.028601  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:11:02.277865  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 start --wait true --alsologtostderr -v 5: (57.591233077s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.75s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.56s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 node delete m03 --alsologtostderr -v 5
E1207 23:11:19.510048  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 node delete m03 --alsologtostderr -v 5: (6.753081891s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.56s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (30.14s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 stop --alsologtostderr -v 5
E1207 23:11:52.261718  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 stop --alsologtostderr -v 5: (30.020562406s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-907658 status --alsologtostderr -v 5: exit status 7 (120.389527ms)

-- stdout --
	ha-907658
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-907658-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-907658-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1207 23:11:52.602106  487042 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:11:52.602548  487042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:11:52.602559  487042 out.go:374] Setting ErrFile to fd 2...
	I1207 23:11:52.602563  487042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:11:52.602784  487042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:11:52.602939  487042 out.go:368] Setting JSON to false
	I1207 23:11:52.602963  487042 mustload.go:66] Loading cluster: ha-907658
	I1207 23:11:52.603075  487042 notify.go:221] Checking for updates...
	I1207 23:11:52.603387  487042 config.go:182] Loaded profile config "ha-907658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:11:52.603410  487042 status.go:174] checking status of ha-907658 ...
	I1207 23:11:52.603912  487042 cli_runner.go:164] Run: docker container inspect ha-907658 --format={{.State.Status}}
	I1207 23:11:52.624024  487042 status.go:371] ha-907658 host status = "Stopped" (err=<nil>)
	I1207 23:11:52.624055  487042 status.go:384] host is not running, skipping remaining checks
	I1207 23:11:52.624064  487042 status.go:176] ha-907658 status: &{Name:ha-907658 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:11:52.624095  487042 status.go:174] checking status of ha-907658-m02 ...
	I1207 23:11:52.624426  487042 cli_runner.go:164] Run: docker container inspect ha-907658-m02 --format={{.State.Status}}
	I1207 23:11:52.642154  487042 status.go:371] ha-907658-m02 host status = "Stopped" (err=<nil>)
	I1207 23:11:52.642175  487042 status.go:384] host is not running, skipping remaining checks
	I1207 23:11:52.642184  487042 status.go:176] ha-907658-m02 status: &{Name:ha-907658-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:11:52.642208  487042 status.go:174] checking status of ha-907658-m04 ...
	I1207 23:11:52.642484  487042 cli_runner.go:164] Run: docker container inspect ha-907658-m04 --format={{.State.Status}}
	I1207 23:11:52.660675  487042 status.go:371] ha-907658-m04 host status = "Stopped" (err=<nil>)
	I1207 23:11:52.660729  487042 status.go:384] host is not running, skipping remaining checks
	I1207 23:11:52.660740  487042 status.go:176] ha-907658-m04 status: &{Name:ha-907658-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (30.14s)

TestMultiControlPlane/serial/AddSecondaryNode (58.72s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 node add --control-plane --alsologtostderr -v 5
E1207 23:16:52.261975  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-907658 node add --control-plane --alsologtostderr -v 5: (57.8314934s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-907658 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (58.72s)

                                                
                                    
TestJSONOutput/start/Command (40.49s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-065588 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-065588 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.493073924s)
--- PASS: TestJSONOutput/start/Command (40.49s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-065588 --output=json --user=testUser
E1207 23:18:15.337515  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:18:18.419981  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-065588 --output=json --user=testUser: (8.003998004s)
--- PASS: TestJSONOutput/stop/Command (8.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-511164 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-511164 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (81.469072ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f75efa8c-569d-4fb1-9986-5580c1f0f260","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-511164] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e6c431ca-febf-44a7-aba3-1b9ca0f02260","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22054"}}
	{"specversion":"1.0","id":"8a300da7-9209-4cfe-bde1-61c3219c730d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eddf1ad0-7c84-43ff-b39b-a297117f03a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig"}}
	{"specversion":"1.0","id":"3905568a-e276-45ae-ace9-0b50eb44df5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube"}}
	{"specversion":"1.0","id":"bfa35c7c-0f4a-4f8c-b74e-1b9e0ab17fc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"30396bb8-61af-4e28-ac89-63e763318d33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a0a28b73-af46-4c97-abda-1958ebb1d5e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-511164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-511164
--- PASS: TestErrorJSONOutput (0.24s)
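
Each --output=json line above is a CloudEvents-style JSON object. A minimal Go sketch, assuming only the event layout shown in this stdout block (specversion, id, source, type, datacontenttype, and a string-valued data map), that decodes such a stream from stdin:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event mirrors the JSON lines printed by `minikube start --output=json` above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Pipe minikube's JSON output in, e.g.:
	//   minikube start -p demo --output=json --driver=docker | go run decode.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // JSON lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		default:
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}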

                                                
                                    
TestKicCustomNetwork/create_custom_network (31.87s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-062475 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-062475 --network=: (29.690042014s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-062475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-062475
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-062475: (2.161233671s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.87s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.06s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-896435 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-896435 --network=bridge: (22.005867136s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-896435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-896435
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-896435: (2.030658706s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.06s)

                                                
                                    
TestKicExistingNetwork (22.98s)

=== RUN   TestKicExistingNetwork
I1207 23:19:22.876193  393125 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1207 23:19:22.893482  393125 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1207 23:19:22.893566  393125 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1207 23:19:22.893596  393125 cli_runner.go:164] Run: docker network inspect existing-network
W1207 23:19:22.912343  393125 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1207 23:19:22.912384  393125 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1207 23:19:22.912403  393125 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1207 23:19:22.912538  393125 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1207 23:19:22.932148  393125 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-918c8f4f6e86 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:f0:02:fe:94:4b} reservation:<nil>}
I1207 23:19:22.932572  393125 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0051fb8f0}
I1207 23:19:22.932641  393125 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1207 23:19:22.932694  393125 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1207 23:19:22.981424  393125 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-467591 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-467591 --network=existing-network: (20.77655604s)
helpers_test.go:175: Cleaning up "existing-network-467591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-467591
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-467591: (2.059878715s)
I1207 23:19:45.837096  393125 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.98s)
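
The trace above shows how the existing-network bridge is prepared: the subnet already held by another minikube network (192.168.49.0/24) is skipped and the next free private /24 is created with an explicit gateway, MTU 1500 and minikube labels. A minimal Go sketch replaying the same docker network create and inspect calls via os/exec; the network name and subnet are copied from this run and are otherwise illustrative:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const name = "existing-network" // network name as used by the test above

	// Same flags as the network_create.go call in the log: bridge driver,
	// explicit subnet/gateway, MTU 1500, and minikube bookkeeping labels.
	create := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name)
	if out, err := create.CombinedOutput(); err != nil {
		log.Fatalf("create failed: %v\n%s", err, out)
	}

	// Verify the subnet actually attached to the network, using the same
	// inspect format string as the TestKicCustomSubnet check below.
	inspect := exec.Command("docker", "network", "inspect", name,
		"--format", "{{(index .IPAM.Config 0).Subnet}}")
	out, err := inspect.Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(name, "subnet:", strings.TrimSpace(string(out)))
}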

                                                
                                    
TestKicCustomSubnet (22.81s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-547991 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-547991 --subnet=192.168.60.0/24: (20.61394469s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-547991 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-547991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-547991
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-547991: (2.176625715s)
--- PASS: TestKicCustomSubnet (22.81s)

                                                
                                    
TestKicStaticIP (24.55s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-211273 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-211273 --static-ip=192.168.200.200: (22.184857774s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-211273 ip
helpers_test.go:175: Cleaning up "static-ip-211273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-211273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-211273: (2.201992833s)
--- PASS: TestKicStaticIP (24.55s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (51.96s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-631165 --driver=docker  --container-runtime=crio
E1207 23:20:38.533827  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-631165 --driver=docker  --container-runtime=crio: (22.172984413s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-633209 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-633209 --driver=docker  --container-runtime=crio: (23.809223319s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-631165
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-633209
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-633209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-633209
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-633209: (2.373237119s)
helpers_test.go:175: Cleaning up "first-631165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-631165
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-631165: (2.36291011s)
--- PASS: TestMinikubeProfile (51.96s)
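
TestMinikubeProfile switches the active profile and inspects the result with minikube profile list -ojson. A minimal Go sketch that runs the same listing; since the exact JSON schema is not reproduced in the log, it decodes generically and only assumes a per-profile Name field, falling back to printing the raw value if that assumption does not hold:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same command the test runs above ("-o json" spelled with a space).
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}

	// Decode generically rather than assuming a fixed top-level schema.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		log.Fatal(err)
	}
	for key, raw := range doc {
		var entries []struct {
			Name string `json:"Name"` // assumed field name
		}
		if err := json.Unmarshal(raw, &entries); err != nil {
			fmt.Printf("%s: %s\n", key, raw)
			continue
		}
		for _, e := range entries {
			fmt.Printf("%s profile: %s\n", key, e.Name)
		}
	}
}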

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.95s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-989982 --memory=3072 --mount-string /tmp/TestMountStartserial2572760249/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-989982 --memory=3072 --mount-string /tmp/TestMountStartserial2572760249/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.950447036s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.95s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-989982 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
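
The two MountStart steps above start a no-Kubernetes profile with a host directory mounted into the node and then list the mount point over ssh. A minimal Go sketch replaying the same flags; the profile name and host path are illustrative, and the host directory must already exist:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const profile = "mount-demo"           // illustrative profile name
	hostDir := "/tmp/mount-demo"           // illustrative host directory (must exist)
	mountStr := hostDir + ":/minikube-host" // same host:guest form as above

	// Start with the same mount-related flags the test passes above.
	start := exec.Command("minikube", "start", "-p", profile,
		"--memory=3072",
		"--mount-string", mountStr,
		"--mount-gid", "0", "--mount-uid", "0",
		"--mount-msize", "6543", "--mount-port", "46464",
		"--no-kubernetes", "--driver=docker", "--container-runtime=crio")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	// Verify the mount from inside the node, as VerifyMountFirst does.
	ls := exec.Command("minikube", "-p", profile, "ssh", "--", "ls", "/minikube-host")
	out, err := ls.CombinedOutput()
	if err != nil {
		log.Fatalf("ssh ls failed: %v\n%s", err, out)
	}
	fmt.Printf("contents of /minikube-host:\n%s", out)
}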

                                                
                                    
TestMountStart/serial/StartWithMountSecond (4.73s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-004496 --memory=3072 --mount-string /tmp/TestMountStartserial2572760249/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-004496 --memory=3072 --mount-string /tmp/TestMountStartserial2572760249/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.726819774s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.73s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004496 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-989982 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-989982 --alsologtostderr -v=5: (1.701711871s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004496 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-004496
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-004496: (1.271303887s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.1s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-004496
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-004496: (7.103674093s)
--- PASS: TestMountStart/serial/RestartStopped (8.10s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004496 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (62.59s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-538075 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1207 23:21:52.262023  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-538075 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m2.102881447s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.59s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.5s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-538075 -- rollout status deployment/busybox: (3.051380635s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- exec busybox-7b57f96db7-9xczl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- exec busybox-7b57f96db7-nc47b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- exec busybox-7b57f96db7-9xczl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- exec busybox-7b57f96db7-nc47b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- exec busybox-7b57f96db7-9xczl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- exec busybox-7b57f96db7-nc47b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.50s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- exec busybox-7b57f96db7-9xczl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- exec busybox-7b57f96db7-9xczl -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- exec busybox-7b57f96db7-nc47b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-538075 -- exec busybox-7b57f96db7-nc47b -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
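
PingHostFrom2Pods resolves host.minikube.internal inside each busybox pod (nslookup piped through awk 'NR==5' and cut to pull the address off the fifth line of busybox's nslookup output, 192.168.67.1 in this run) and then pings that address once. A minimal Go sketch of the same in-pod pipeline driven through kubectl exec; the context and pod names are copied from this run and are otherwise illustrative:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Assumed kubectl context and pod name; the real test discovers pod
	// names with `kubectl get pods -o jsonpath=...` first.
	kubeContext := "multinode-538075"
	pod := "busybox-7b57f96db7-9xczl"

	// Same pipeline as the test: the fifth line of busybox's nslookup output
	// carries the resolved address, which awk/cut extract.
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", kubeContext, "exec", pod, "--",
		"sh", "-c", resolve).Output()
	if err != nil {
		log.Fatal(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// Ping the host once from inside the pod, as the test does.
	ping := exec.Command("kubectl", "--context", kubeContext, "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+hostIP)
	if pingOut, err := ping.CombinedOutput(); err != nil {
		log.Fatalf("ping failed: %v\n%s", err, pingOut)
	}
	fmt.Println("host is reachable from", pod)
}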

                                                
                                    
TestMultiNode/serial/AddNode (56.23s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-538075 -v=5 --alsologtostderr
E1207 23:23:18.418575  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-538075 -v=5 --alsologtostderr: (55.55897387s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.23s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-538075 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 cp testdata/cp-test.txt multinode-538075:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 cp multinode-538075:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2476845653/001/cp-test_multinode-538075.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 cp multinode-538075:/home/docker/cp-test.txt multinode-538075-m02:/home/docker/cp-test_multinode-538075_multinode-538075-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075-m02 "sudo cat /home/docker/cp-test_multinode-538075_multinode-538075-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 cp multinode-538075:/home/docker/cp-test.txt multinode-538075-m03:/home/docker/cp-test_multinode-538075_multinode-538075-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075-m03 "sudo cat /home/docker/cp-test_multinode-538075_multinode-538075-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 cp testdata/cp-test.txt multinode-538075-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 cp multinode-538075-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2476845653/001/cp-test_multinode-538075-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 cp multinode-538075-m02:/home/docker/cp-test.txt multinode-538075:/home/docker/cp-test_multinode-538075-m02_multinode-538075.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075 "sudo cat /home/docker/cp-test_multinode-538075-m02_multinode-538075.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 cp multinode-538075-m02:/home/docker/cp-test.txt multinode-538075-m03:/home/docker/cp-test_multinode-538075-m02_multinode-538075-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075-m03 "sudo cat /home/docker/cp-test_multinode-538075-m02_multinode-538075-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 cp testdata/cp-test.txt multinode-538075-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 cp multinode-538075-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2476845653/001/cp-test_multinode-538075-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 cp multinode-538075-m03:/home/docker/cp-test.txt multinode-538075:/home/docker/cp-test_multinode-538075-m03_multinode-538075.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075 "sudo cat /home/docker/cp-test_multinode-538075-m03_multinode-538075.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 cp multinode-538075-m03:/home/docker/cp-test.txt multinode-538075-m02:/home/docker/cp-test_multinode-538075-m03_multinode-538075-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 ssh -n multinode-538075-m02 "sudo cat /home/docker/cp-test_multinode-538075-m03_multinode-538075-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.07s)
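
Every copy direction above follows the same two-step pattern: minikube cp moves the file (host to node, node to host, or node to node) and minikube ssh -n <node> "sudo cat ..." reads it back to confirm the contents. A minimal Go sketch of one host-to-node round trip; the profile and node names are copied from this run and are otherwise illustrative:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile = "multinode-538075" // profile from the run above
	const node = "multinode-538075-m02"
	const remote = "/home/docker/cp-test.txt"

	// Write a small local file to copy (stand-in for testdata/cp-test.txt).
	local, err := os.CreateTemp("", "cp-test-*.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(local.Name())
	if _, err := local.WriteString("hello from the host\n"); err != nil {
		log.Fatal(err)
	}
	local.Close()

	// minikube cp <local> <node>:<path>, as in the helpers above.
	cp := exec.Command("minikube", "-p", profile, "cp", local.Name(), node+":"+remote)
	if out, err := cp.CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// Read it back over ssh on the target node to verify the copy.
	cat := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+remote)
	out, err := cat.Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote contents: %s", out)
}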

                                                
                                    
TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-538075 node stop m03: (1.283134571s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-538075 status: exit status 7 (496.338696ms)

                                                
                                                
-- stdout --
	multinode-538075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-538075-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-538075-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-538075 status --alsologtostderr: exit status 7 (504.564035ms)

                                                
                                                
-- stdout --
	multinode-538075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-538075-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-538075-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:24:08.710491  550031 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:24:08.710627  550031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:24:08.710639  550031 out.go:374] Setting ErrFile to fd 2...
	I1207 23:24:08.710645  550031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:24:08.710866  550031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:24:08.711038  550031 out.go:368] Setting JSON to false
	I1207 23:24:08.711064  550031 mustload.go:66] Loading cluster: multinode-538075
	I1207 23:24:08.711200  550031 notify.go:221] Checking for updates...
	I1207 23:24:08.711449  550031 config.go:182] Loaded profile config "multinode-538075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:24:08.711463  550031 status.go:174] checking status of multinode-538075 ...
	I1207 23:24:08.711943  550031 cli_runner.go:164] Run: docker container inspect multinode-538075 --format={{.State.Status}}
	I1207 23:24:08.734296  550031 status.go:371] multinode-538075 host status = "Running" (err=<nil>)
	I1207 23:24:08.734337  550031 host.go:66] Checking if "multinode-538075" exists ...
	I1207 23:24:08.734599  550031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-538075
	I1207 23:24:08.752695  550031 host.go:66] Checking if "multinode-538075" exists ...
	I1207 23:24:08.752944  550031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:24:08.752981  550031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-538075
	I1207 23:24:08.770743  550031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33288 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/multinode-538075/id_rsa Username:docker}
	I1207 23:24:08.862828  550031 ssh_runner.go:195] Run: systemctl --version
	I1207 23:24:08.869392  550031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:24:08.882452  550031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:24:08.941314  550031 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-07 23:24:08.931251118 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:24:08.941921  550031 kubeconfig.go:125] found "multinode-538075" server: "https://192.168.67.2:8443"
	I1207 23:24:08.941955  550031 api_server.go:166] Checking apiserver status ...
	I1207 23:24:08.942001  550031 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 23:24:08.954069  550031 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup
	W1207 23:24:08.962701  550031 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1207 23:24:08.962759  550031 ssh_runner.go:195] Run: ls
	I1207 23:24:08.966807  550031 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 23:24:08.972153  550031 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1207 23:24:08.972180  550031 status.go:463] multinode-538075 apiserver status = Running (err=<nil>)
	I1207 23:24:08.972190  550031 status.go:176] multinode-538075 status: &{Name:multinode-538075 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:24:08.972220  550031 status.go:174] checking status of multinode-538075-m02 ...
	I1207 23:24:08.972510  550031 cli_runner.go:164] Run: docker container inspect multinode-538075-m02 --format={{.State.Status}}
	I1207 23:24:08.991247  550031 status.go:371] multinode-538075-m02 host status = "Running" (err=<nil>)
	I1207 23:24:08.991272  550031 host.go:66] Checking if "multinode-538075-m02" exists ...
	I1207 23:24:08.991564  550031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-538075-m02
	I1207 23:24:09.009957  550031 host.go:66] Checking if "multinode-538075-m02" exists ...
	I1207 23:24:09.010230  550031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 23:24:09.010271  550031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-538075-m02
	I1207 23:24:09.028606  550031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33293 SSHKeyPath:/home/jenkins/minikube-integration/22054-389542/.minikube/machines/multinode-538075-m02/id_rsa Username:docker}
	I1207 23:24:09.119836  550031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 23:24:09.132434  550031 status.go:176] multinode-538075-m02 status: &{Name:multinode-538075-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:24:09.132510  550031 status.go:174] checking status of multinode-538075-m03 ...
	I1207 23:24:09.132786  550031 cli_runner.go:164] Run: docker container inspect multinode-538075-m03 --format={{.State.Status}}
	I1207 23:24:09.151143  550031 status.go:371] multinode-538075-m03 host status = "Stopped" (err=<nil>)
	I1207 23:24:09.151198  550031 status.go:384] host is not running, skipping remaining checks
	I1207 23:24:09.151214  550031 status.go:176] multinode-538075-m03 status: &{Name:multinode-538075-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.22s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-538075 node start m03 -v=5 --alsologtostderr: (6.51733071s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.22s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (77.18s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-538075
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-538075
E1207 23:24:41.484746  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-538075: (31.459544513s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-538075 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-538075 --wait=true -v=5 --alsologtostderr: (45.586768519s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-538075
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.18s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.3s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-538075 node delete m03: (4.693333622s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 status --alsologtostderr
E1207 23:25:38.534616  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)
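
The go-template query above walks every node's status.conditions and prints only the Ready condition's status, so after the delete one True per remaining node is expected. A minimal Go sketch running the same template and counting Ready nodes; the kubectl context is copied from this run and is otherwise illustrative:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same go-template as the test: print the status of each node's
	// "Ready" condition, one per line.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "--context", "multinode-538075",
		"get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatal(err)
	}

	ready, total := 0, 0
	for _, status := range strings.Fields(string(out)) {
		total++
		if status == "True" {
			ready++
		}
	}
	fmt.Printf("%d of %d nodes Ready\n", ready, total)
}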

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.57s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-538075 stop: (28.3645746s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-538075 status: exit status 7 (100.506784ms)

                                                
                                                
-- stdout --
	multinode-538075
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-538075-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-538075 status --alsologtostderr: exit status 7 (103.736636ms)

                                                
                                                
-- stdout --
	multinode-538075
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-538075-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:26:07.379520  559809 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:26:07.379645  559809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:26:07.379653  559809 out.go:374] Setting ErrFile to fd 2...
	I1207 23:26:07.379658  559809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:26:07.379875  559809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:26:07.380026  559809 out.go:368] Setting JSON to false
	I1207 23:26:07.380050  559809 mustload.go:66] Loading cluster: multinode-538075
	I1207 23:26:07.380211  559809 notify.go:221] Checking for updates...
	I1207 23:26:07.380416  559809 config.go:182] Loaded profile config "multinode-538075": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:26:07.380449  559809 status.go:174] checking status of multinode-538075 ...
	I1207 23:26:07.380957  559809 cli_runner.go:164] Run: docker container inspect multinode-538075 --format={{.State.Status}}
	I1207 23:26:07.402199  559809 status.go:371] multinode-538075 host status = "Stopped" (err=<nil>)
	I1207 23:26:07.402219  559809 status.go:384] host is not running, skipping remaining checks
	I1207 23:26:07.402226  559809 status.go:176] multinode-538075 status: &{Name:multinode-538075 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 23:26:07.402247  559809 status.go:174] checking status of multinode-538075-m02 ...
	I1207 23:26:07.402532  559809 cli_runner.go:164] Run: docker container inspect multinode-538075-m02 --format={{.State.Status}}
	I1207 23:26:07.421926  559809 status.go:371] multinode-538075-m02 host status = "Stopped" (err=<nil>)
	I1207 23:26:07.421956  559809 status.go:384] host is not running, skipping remaining checks
	I1207 23:26:07.421963  559809 status.go:176] multinode-538075-m02 status: &{Name:multinode-538075-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.57s)
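
The status checks above rely on minikube's exit code rather than on parsing the printed table: once both nodes are stopped, "minikube status" exits non-zero (7 in this run). A minimal Go sketch of the same check, assuming the out/minikube-linux-amd64 binary and the multinode-538075 profile from this log; it is an illustration, not part of the test suite:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run "minikube status" for the profile used above and report how the
	// command exited. Exit code 0 means all components are running; the
	// non-zero code (7 in the log) is what the test treats as "stopped".
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-538075", "status")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube:", err)
	}
}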

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (44.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-538075 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-538075 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (43.960513717s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-538075 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.57s)
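
The readiness check on the line above asks kubectl for each node's Ready condition through a go-template and expects one "True" per node. A short Go sketch that runs the same template and verifies the result, assuming kubectl and the restarted cluster's kubeconfig are available in the environment; the program is illustrative, not test code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same go-template as the test: print the status of each node's
	// "Ready" condition, one per line.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("found a node that is not Ready:", status)
			return
		}
	}
	fmt.Println("all nodes report Ready=True")
}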

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (22.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-538075
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-538075-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-538075-m02 --driver=docker  --container-runtime=crio: exit status 14 (80.070152ms)

                                                
                                                
-- stdout --
	* [multinode-538075-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-538075-m02' is duplicated with machine name 'multinode-538075-m02' in profile 'multinode-538075'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-538075-m03 --driver=docker  --container-runtime=crio
E1207 23:26:52.262111  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1207 23:27:01.598568  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-458242/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-538075-m03 --driver=docker  --container-runtime=crio: (19.92903497s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-538075
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-538075: exit status 80 (299.360159ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-538075 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-538075-m03 already exists in multinode-538075-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-538075-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-538075-m03: (2.390606999s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.76s)
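
The conflict arises because secondary nodes are named <profile>-m02, <profile>-m03, and so on: a new profile called multinode-538075-m02 collides with an existing machine name, while multinode-538075-m03 does not (until the later node-add step tries to reuse that name). A hypothetical sketch of that uniqueness rule; conflictsWith and the map layout are invented for illustration and are not minikube's validation code:

package main

import "fmt"

// conflictsWith reports whether a proposed profile name collides with an
// existing profile or with one of its numbered node names (<profile>-mNN).
func conflictsWith(proposed string, existingProfiles map[string][]string) bool {
	for profile, nodes := range existingProfiles {
		if proposed == profile {
			return true
		}
		for _, node := range nodes {
			if proposed == node {
				return true
			}
		}
	}
	return false
}

func main() {
	existing := map[string][]string{
		"multinode-538075": {"multinode-538075", "multinode-538075-m02"},
	}
	fmt.Println(conflictsWith("multinode-538075-m02", existing)) // true, rejected as in the log
	fmt.Println(conflictsWith("multinode-538075-m03", existing)) // false, so that start succeeds
}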

                                                
                                    
x
+
TestScheduledStopUnix (99.94s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-899153 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-899153 --memory=3072 --driver=docker  --container-runtime=crio: (23.638208922s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-899153 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1207 23:27:42.692454  569690 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:27:42.692707  569690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:27:42.692715  569690 out.go:374] Setting ErrFile to fd 2...
	I1207 23:27:42.692719  569690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:27:42.692913  569690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:27:42.693160  569690 out.go:368] Setting JSON to false
	I1207 23:27:42.693247  569690 mustload.go:66] Loading cluster: scheduled-stop-899153
	I1207 23:27:42.693567  569690 config.go:182] Loaded profile config "scheduled-stop-899153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:27:42.693636  569690 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/config.json ...
	I1207 23:27:42.693821  569690 mustload.go:66] Loading cluster: scheduled-stop-899153
	I1207 23:27:42.693930  569690 config.go:182] Loaded profile config "scheduled-stop-899153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-899153 -n scheduled-stop-899153
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1207 23:27:43.080184  569837 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:27:43.080503  569837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:27:43.080515  569837 out.go:374] Setting ErrFile to fd 2...
	I1207 23:27:43.080521  569837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:27:43.080749  569837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:27:43.081047  569837 out.go:368] Setting JSON to false
	I1207 23:27:43.081286  569837 daemonize_unix.go:73] killing process 569725 as it is an old scheduled stop
	I1207 23:27:43.081419  569837 mustload.go:66] Loading cluster: scheduled-stop-899153
	I1207 23:27:43.081892  569837 config.go:182] Loaded profile config "scheduled-stop-899153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:27:43.081973  569837 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/config.json ...
	I1207 23:27:43.082190  569837 mustload.go:66] Loading cluster: scheduled-stop-899153
	I1207 23:27:43.082314  569837 config.go:182] Loaded profile config "scheduled-stop-899153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1207 23:27:43.087293  393125 retry.go:31] will retry after 93.443µs: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.088461  393125 retry.go:31] will retry after 162.457µs: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.089597  393125 retry.go:31] will retry after 232.51µs: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.090767  393125 retry.go:31] will retry after 399.565µs: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.091894  393125 retry.go:31] will retry after 436.891µs: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.093029  393125 retry.go:31] will retry after 892.144µs: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.094148  393125 retry.go:31] will retry after 1.690937ms: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.096388  393125 retry.go:31] will retry after 2.26909ms: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.099604  393125 retry.go:31] will retry after 2.45595ms: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.102855  393125 retry.go:31] will retry after 2.191369ms: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.106124  393125 retry.go:31] will retry after 4.66307ms: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.111353  393125 retry.go:31] will retry after 5.017489ms: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.116584  393125 retry.go:31] will retry after 16.77029ms: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.133824  393125 retry.go:31] will retry after 10.879788ms: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.145087  393125 retry.go:31] will retry after 27.073026ms: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
I1207 23:27:43.172276  393125 retry.go:31] will retry after 32.558929ms: open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-899153 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-899153 -n scheduled-stop-899153
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-899153
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-899153 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1207 23:28:08.977041  570484 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:28:08.977177  570484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:28:08.977188  570484 out.go:374] Setting ErrFile to fd 2...
	I1207 23:28:08.977193  570484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:28:08.977454  570484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:28:08.977705  570484 out.go:368] Setting JSON to false
	I1207 23:28:08.977788  570484 mustload.go:66] Loading cluster: scheduled-stop-899153
	I1207 23:28:08.978119  570484 config.go:182] Loaded profile config "scheduled-stop-899153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:28:08.978189  570484 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/config.json ...
	I1207 23:28:08.978392  570484 mustload.go:66] Loading cluster: scheduled-stop-899153
	I1207 23:28:08.978498  570484 config.go:182] Loaded profile config "scheduled-stop-899153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
E1207 23:28:18.422841  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-899153
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-899153: exit status 7 (87.532429ms)

                                                
                                                
-- stdout --
	scheduled-stop-899153
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-899153 -n scheduled-stop-899153
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-899153 -n scheduled-stop-899153: exit status 7 (84.926352ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-899153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-899153
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-899153: (4.78296067s)
--- PASS: TestScheduledStopUnix (99.94s)
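
The retry.go lines above show the test waiting for the scheduled-stop pid file with a delay that roughly doubles on each attempt. A minimal sketch of that wait-with-backoff pattern; waitForFile, the 5-second budget, and the exact backoff values are illustrative assumptions, and only the pid path is taken from the log:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for a file with a growing delay, similar to the
// retry.go lines in the log that wait for the scheduled-stop pid file.
func waitForFile(path string, maxWait time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %s not found\n", delay, path)
		time.Sleep(delay)
		delay *= 2 // illustrative doubling, not minikube's exact schedule
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	pid := "/home/jenkins/minikube-integration/22054-389542/.minikube/profiles/scheduled-stop-899153/pid"
	if err := waitForFile(pid, 5*time.Second); err != nil {
		fmt.Println(err)
	}
}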

                                                
                                    
x
+
TestInsufficientStorage (9.07s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-517111 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-517111 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.565243979s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4f6c1d4f-af5f-43e0-8c31-7f943dc1fd60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-517111] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"30fcaeef-73fa-4e9a-8025-21546e44c71c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22054"}}
	{"specversion":"1.0","id":"d5570567-6060-42b0-91a4-cd627953f612","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a32b38b0-47c3-4b99-805b-1c066e69578d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig"}}
	{"specversion":"1.0","id":"479fc9f1-184f-4c22-8ae4-901fe9506bc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube"}}
	{"specversion":"1.0","id":"a8f7fdcf-9dd2-40f5-bb5e-6651225b5c99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ed94d162-9747-423e-9e80-8dfd49283d4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"893e37ca-a680-43b4-a89b-87cdc8a61dc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b43f0dfc-cab2-48d9-850e-3b36ad672f08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fb2767b9-bb36-4319-abb2-7ff479929f33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5a6b4a8-c201-452d-a165-21cdda024446","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"44292695-b3bd-4d43-af7b-3ab2524d1bc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-517111\" primary control-plane node in \"insufficient-storage-517111\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"49a68fef-a234-4e54-a52b-8b53b7134255","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764843390-22032 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"897d0e3d-4122-48dd-8486-cf56ce507ce2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"28b804e9-2804-418f-b8ec-d5d15fc6002c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-517111 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-517111 --output=json --layout=cluster: exit status 7 (297.081684ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-517111","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-517111","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 23:29:05.787804  573030 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-517111" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-517111 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-517111 --output=json --layout=cluster: exit status 7 (289.272786ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-517111","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-517111","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 23:29:06.077755  573139 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-517111" does not appear in /home/jenkins/minikube-integration/22054-389542/kubeconfig
	E1207 23:29:06.088186  573139 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/insufficient-storage-517111/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-517111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-517111
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-517111: (1.919094422s)
--- PASS: TestInsufficientStorage (9.07s)
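
With --output=json, minikube start prints one CloudEvents-style JSON object per line (specversion, id, source, type, data), and the out-of-space failure arrives as an io.k8s.sigs.minikube.error event carrying the message and exit code. A small decoder sketch; the struct keeps only the fields visible in the lines above and assumes the JSON output is piped in on stdin:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the JSON lines above; the real
// output may carry more.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Example use: out/minikube-linux-amd64 start -p demo --output=json | go run main.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("error event:", ev.Data["message"], "exit code:", ev.Data["exitcode"])
		}
	}
}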

                                                
                                    
x
+
TestRunningBinaryUpgrade (50.15s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1025421696 start -p running-upgrade-991102 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1025421696 start -p running-upgrade-991102 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.392462349s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-991102 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-991102 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.381816554s)
helpers_test.go:175: Cleaning up "running-upgrade-991102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-991102
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-991102: (2.059040852s)
--- PASS: TestRunningBinaryUpgrade (50.15s)

                                                
                                    
x
+
TestKubernetesUpgrade (294.85s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.755938858s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-703538
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-703538: (2.090846476s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-703538 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-703538 status --format={{.Host}}: exit status 7 (95.780682ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m20.169310146s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-703538 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (96.300608ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-703538] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-703538
	    minikube start -p kubernetes-upgrade-703538 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7035382 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-703538 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-703538 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.134560839s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-703538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-703538
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-703538: (2.440880261s)
--- PASS: TestKubernetesUpgrade (294.85s)
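
The downgrade attempt is rejected up front (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) because the requested version is older than the one already deployed. A hypothetical sketch of that comparison using the golang.org/x/mod/semver package; checkUpgrade is illustrative only and is not minikube's actual guard:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkUpgrade rejects a move to an older Kubernetes version than the one
// already deployed, mirroring the rule seen in the log.
func checkUpgrade(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkUpgrade("v1.35.0-beta.0", "v1.28.0")) // rejected, as in the log
	fmt.Println(checkUpgrade("v1.28.0", "v1.35.0-beta.0")) // nil: upgrades are allowed
}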

                                                
                                    
x
+
TestMissingContainerUpgrade (80.33s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.241869669 start -p missing-upgrade-776369 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.241869669 start -p missing-upgrade-776369 --memory=3072 --driver=docker  --container-runtime=crio: (21.95998689s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-776369
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-776369: (10.476425577s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-776369
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-776369 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-776369 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.696823348s)
helpers_test.go:175: Cleaning up "missing-upgrade-776369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-776369
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-776369: (2.548977185s)
--- PASS: TestMissingContainerUpgrade (80.33s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.30s)

                                                
                                    
x
+
TestPause/serial/Start (56.42s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-567110 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-567110 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (56.417814849s)
--- PASS: TestPause/serial/Start (56.42s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (325.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.705412123 start -p stopped-upgrade-604160 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.705412123 start -p stopped-upgrade-604160 --memory=3072 --vm-driver=docker  --container-runtime=crio: (50.174055019s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.705412123 -p stopped-upgrade-604160 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.705412123 -p stopped-upgrade-604160 stop: (14.114329983s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-604160 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-604160 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m21.125611896s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (325.41s)
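
The upgrade scenario is three commands in sequence: start with the old release binary, stop the cluster, then start again with the freshly built binary against the same profile. A sketch that chains those steps with os/exec; the binary paths and profile name are the ones from this run, and run() is an illustrative helper rather than test code:

package main

import (
	"fmt"
	"os/exec"
)

// run executes one step of the upgrade flow and returns its error, if any.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	steps := [][]string{
		{"/tmp/minikube-v1.35.0.705412123", "start", "-p", "stopped-upgrade-604160", "--memory=3072", "--vm-driver=docker", "--container-runtime=crio"},
		{"/tmp/minikube-v1.35.0.705412123", "-p", "stopped-upgrade-604160", "stop"},
		{"out/minikube-linux-amd64", "start", "-p", "stopped-upgrade-604160", "--memory=3072", "--driver=docker", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}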

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.32s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-567110 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-567110 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.304125004s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-689231 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-689231 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (83.026218ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-689231] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
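
The usage error above comes from a simple mutual-exclusion rule: --no-kubernetes cannot be combined with --kubernetes-version. A sketch of that kind of flag validation with Go's standard flag package; the flag names mirror the CLI options in the log, but the program itself is illustrative and not minikube code:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the usage-error exit code seen in the log
	}
	fmt.Println("flags OK")
}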

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (19.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-689231 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-689231 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (19.614554026s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-689231 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (19.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (23.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-689231 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-689231 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.70793134s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-689231 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-689231 status -o json: exit status 2 (310.412267ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-689231","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-689231
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-689231: (2.013498927s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.03s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (3.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-689231 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-689231 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (3.978329809s)
--- PASS: TestNoKubernetes/serial/Start (3.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22054-389542/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-689231 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-689231 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.827305ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
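
The verification step probes kubelet through "minikube ssh" with systemctl is-active, so a non-zero exit (status 3 above) is the expected result on a --no-kubernetes profile. A minimal sketch of the same probe, assuming the out/minikube-linux-amd64 binary and the NoKubernetes-689231 profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Zero exit means the kubelet unit is active; non-zero means it is not
	// running, which is what the test expects here.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-689231",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active (expected for a --no-kubernetes profile):", err)
		return
	}
	fmt.Println("kubelet is active")
}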

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (16.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.375322391s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-689231
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-689231: (1.292247133s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-689231 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-689231 --driver=docker  --container-runtime=crio: (6.971735234s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-689231 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-689231 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.430683ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-600852 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-600852 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (174.554341ms)

                                                
                                                
-- stdout --
	* [false-600852] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 23:33:10.877584  629131 out.go:360] Setting OutFile to fd 1 ...
	I1207 23:33:10.877887  629131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:33:10.877900  629131 out.go:374] Setting ErrFile to fd 2...
	I1207 23:33:10.877906  629131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1207 23:33:10.878118  629131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-389542/.minikube/bin
	I1207 23:33:10.878635  629131 out.go:368] Setting JSON to false
	I1207 23:33:10.879766  629131 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8135,"bootTime":1765142256,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1207 23:33:10.879838  629131 start.go:143] virtualization: kvm guest
	I1207 23:33:10.881959  629131 out.go:179] * [false-600852] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1207 23:33:10.883121  629131 out.go:179]   - MINIKUBE_LOCATION=22054
	I1207 23:33:10.883179  629131 notify.go:221] Checking for updates...
	I1207 23:33:10.885346  629131 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 23:33:10.886473  629131 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-389542/kubeconfig
	I1207 23:33:10.887466  629131 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-389542/.minikube
	I1207 23:33:10.888502  629131 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1207 23:33:10.889622  629131 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 23:33:10.891378  629131 config.go:182] Loaded profile config "cert-expiration-612608": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1207 23:33:10.891541  629131 config.go:182] Loaded profile config "kubernetes-upgrade-703538": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1207 23:33:10.891676  629131 config.go:182] Loaded profile config "stopped-upgrade-604160": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1207 23:33:10.891810  629131 driver.go:422] Setting default libvirt URI to qemu:///system
	I1207 23:33:10.917226  629131 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1207 23:33:10.917396  629131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 23:33:10.977780  629131 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-07 23:33:10.966802179 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1207 23:33:10.977884  629131 docker.go:319] overlay module found
	I1207 23:33:10.979537  629131 out.go:179] * Using the docker driver based on user configuration
	I1207 23:33:10.980586  629131 start.go:309] selected driver: docker
	I1207 23:33:10.980601  629131 start.go:927] validating driver "docker" against <nil>
	I1207 23:33:10.980613  629131 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 23:33:10.982170  629131 out.go:203] 
	W1207 23:33:10.983179  629131 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1207 23:33:10.984539  629131 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-600852 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-600852

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-600852

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-600852

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-600852

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-600852

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-600852

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-600852

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-600852

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-600852

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-600852

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-600852

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-600852" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-600852" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:30:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-612608
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:31:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-703538
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:30:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-604160
contexts:
- context:
    cluster: cert-expiration-612608
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:30:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-612608
  name: cert-expiration-612608
- context:
    cluster: kubernetes-upgrade-703538
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:31:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-703538
  name: kubernetes-upgrade-703538
- context:
    cluster: stopped-upgrade-604160
    user: stopped-upgrade-604160
  name: stopped-upgrade-604160
current-context: ""
kind: Config
users:
- name: cert-expiration-612608
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/cert-expiration-612608/client.crt
    client-key: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/cert-expiration-612608/client.key
- name: kubernetes-upgrade-703538
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/kubernetes-upgrade-703538/client.crt
    client-key: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/kubernetes-upgrade-703538/client.key
- name: stopped-upgrade-604160
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/stopped-upgrade-604160/client.crt
    client-key: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/stopped-upgrade-604160/client.key
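Note: current-context is empty in the dump above, so every kubectl probe in this capture fails with "context was not found" even though three contexts exist. Standard kubectl commands for inspecting and selecting one of the listed contexts (the context name is taken from the dump):

	kubectl config get-contexts
	kubectl config use-context cert-expiration-612608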

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-600852

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-600852"

                                                
                                                
----------------------- debugLogs end: false-600852 [took: 3.407133528s] --------------------------------
helpers_test.go:175: Cleaning up "false-600852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-600852
--- PASS: TestNetworkPlugins/group/false (3.76s)
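Note: the "Profile ... not found" and "context was not found" lines throughout the debug capture above are expected, because the false-600852 profile was rejected at start-up (the MK_USAGE error) and never created. Before reading such a capture it can help to confirm which profiles actually exist, e.g.:

	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 status -p false-600852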

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (51.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.199086387s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (45.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (45.574553584s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (45.57s)
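Note: --preload=false makes minikube skip the preloaded image tarball and pull each Kubernetes image individually, which is what this group exercises. A hedged way to confirm the images ended up in the crio runtime on the node (crictl is standard on minikube nodes; sudo may be required):

	out/minikube-linux-amd64 ssh -p no-preload-313006 "sudo crictl images"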

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-320477 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2f39a4dc-d310-46a6-b18b-a82cecb43bdd] Pending
helpers_test.go:352: "busybox" [2f39a4dc-d310-46a6-b18b-a82cecb43bdd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2f39a4dc-d310-46a6-b18b-a82cecb43bdd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00299766s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-320477 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.27s)
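Note: DeployApp applies testdata/busybox.yaml and waits for the integration-test=busybox pod to become Ready before running the ulimit check. The same wait can be reproduced by hand with standard kubectl (the 8m timeout mirrors the test's budget):

	kubectl --context old-k8s-version-320477 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
	kubectl --context old-k8s-version-320477 exec busybox -- /bin/sh -c "ulimit -n"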

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-320477 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-320477 --alsologtostderr -v=3: (16.099680283s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.10s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-604160
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-604160: (1.139013436s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-320477 -n old-k8s-version-320477
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-320477 -n old-k8s-version-320477: exit status 7 (91.386623ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-320477 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (49.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-320477 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.690365489s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-320477 -n old-k8s-version-320477
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (68.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (1m8.627291548s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (68.63s)
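Note: --embed-certs stores the client certificate and key inline in the kubeconfig entry (client-certificate-data/client-key-data) rather than as file paths under .minikube/profiles, unlike the profiles shown in the earlier kubectl config dump. A rough, illustrative check after the start:

	kubectl config view --raw | grep -A 3 'name: embed-certs-654118'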

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-313006 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9f794bb8-ad22-47d0-a7a7-e5068ff54805] Pending
helpers_test.go:352: "busybox" [9f794bb8-ad22-47d0-a7a7-e5068ff54805] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9f794bb8-ad22-47d0-a7a7-e5068ff54805] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004287231s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-313006 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (18.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-313006 --alsologtostderr -v=3
E1207 23:34:55.339133  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-313006 --alsologtostderr -v=3: (18.914403391s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-313006 -n no-preload-313006
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-313006 -n no-preload-313006: exit status 7 (90.970888ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-313006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (46.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-313006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (46.484710908s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-313006 -n no-preload-313006
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-p5lgr" [990e3703-ccdc-419b-9739-4009d4eef45d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003169571s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-p5lgr" [990e3703-ccdc-419b-9739-4009d4eef45d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003424346s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-320477 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-320477 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
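Note: the image check above uses --format=json; minikube's image list also supports other output formats when a human-readable view is preferred (the format name here is an assumption about the installed minikube version):

	out/minikube-linux-amd64 -p old-k8s-version-320477 image list --format=table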

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-312944 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-312944 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (45.152014911s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.15s)
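Note: this profile moves the API server to port 8444 via --apiserver-port. Once the cluster is up, the non-default port can be confirmed with standard kubectl:

	kubectl --context default-k8s-diff-port-312944 cluster-info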

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-654118 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [64a194a3-ffb4-468c-a744-5215164f87c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [64a194a3-ffb4-468c-a744-5215164f87c1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00415678s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-654118 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (21.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (21.504904353s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (21.51s)
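Note: this profile starts with --network-plugin=cni and a custom pod CIDR but deliberately deploys no CNI, which is why the later newest-cni steps warn that "cni mode requires additional setup before pods can schedule". The missing step, sketched with a placeholder manifest (not something this run applied):

	kubectl --context newest-cni-858719 apply -f <your-cni-manifest.yaml>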

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-zvhhr" [a984abc1-2a0f-441b-9ca7-e10d047bbd98] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002959579s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-654118 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-654118 --alsologtostderr -v=3: (16.806146831s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.81s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-zvhhr" [a984abc1-2a0f-441b-9ca7-e10d047bbd98] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003872074s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-313006 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-313006 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-654118 -n embed-certs-654118
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-654118 -n embed-certs-654118: exit status 7 (115.099728ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-654118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (54.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-654118 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (53.953052894s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-654118 -n embed-certs-654118
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (46.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (46.076663881s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-858719 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-858719 --alsologtostderr -v=3: (8.19170955s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-858719 -n newest-cni-858719
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-858719 -n newest-cni-858719: exit status 7 (106.437165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-858719 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (12.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-858719 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (12.014606318s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-858719 -n newest-cni-858719
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-312944 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [202a50b4-b2e7-4b74-a299-5f38dd0bd9c5] Pending
helpers_test.go:352: "busybox" [202a50b4-b2e7-4b74-a299-5f38dd0bd9c5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [202a50b4-b2e7-4b74-a299-5f38dd0bd9c5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003519603s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-312944 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-858719 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-312944 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-312944 --alsologtostderr -v=3: (18.202586514s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (68.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1207 23:36:52.261785  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/addons-746247/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m8.884675301s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.88s)
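Note: with --cni=kindnet minikube deploys the kindnet DaemonSet into kube-system. A quick health check (the app=kindnet label is an assumption about how the kindnet manifest labels its pods):

	kubectl --context kindnet-600852 -n kube-system get pods -l app=kindnet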

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944: exit status 7 (85.191023ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-312944 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-312944 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-312944 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (48.349678554s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-312944 -n default-k8s-diff-port-312944
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-600852 "pgrep -a kubelet"
I1207 23:37:06.292176  393125 config.go:182] Loaded profile config "auto-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-600852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-924ff" [eca17543-a584-45d7-9bba-cba750c2f658] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-924ff" [eca17543-a584-45d7-9bba-cba750c2f658] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004468208s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8dl4x" [e3184cde-d94b-4dfb-8ba8-a901a5d58d66] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003980024s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-600852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
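
For reference, the DNS, Localhost, and HairPin checks above all run from inside the netcat deployment that net_test.go creates from testdata/netcat-deployment.yaml (the manifest itself is not reproduced in this report). A hand-run equivalent of the three probes against the auto-600852 context, assuming that deployment is still present in the default namespace, is simply:

# in-cluster DNS resolution from the netcat pod
kubectl --context auto-600852 exec deployment/netcat -- nslookup kubernetes.default
# loopback inside the pod
kubectl --context auto-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# hairpin: the pod reaching itself back through its own service name
kubectl --context auto-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"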

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8dl4x" [e3184cde-d94b-4dfb-8ba8-a901a5d58d66] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00453693s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-654118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-654118 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
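
The image check above asks the profile for its image list as JSON and flags anything outside the expected minikube image set. To inspect the same data by hand, the command from the log can be piped through jq; note that jq is not part of the test run, and the repoTags field name is an assumption about the JSON schema, which this report does not show:

# same command the test runs: images present in the profile's CRI-O store
out/minikube-linux-amd64 -p embed-certs-654118 image list --format=json
# hypothetical post-processing: print only the tag names (assumes a repoTags field)
out/minikube-linux-amd64 -p embed-certs-654118 image list --format=json | jq -r '.[].repoTags[]'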

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (51.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (51.488934751s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (46.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (46.536081322s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (46.54s)
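
As the calico and custom-flannel runs above illustrate, minikube's --cni flag accepts either a built-in plugin name or a path to a CNI manifest on disk. Stripped of the --memory/--alsologtostderr/--wait flags used in the full commands above, the two invocations reduce to:

# built-in CNI selected by name (calico run above)
out/minikube-linux-amd64 start -p calico-600852 --cni=calico --driver=docker --container-runtime=crio
# custom CNI applied from a local manifest (custom-flannel run above);
# testdata/kube-flannel.yaml is resolved relative to the test working directory
out/minikube-linux-amd64 start -p custom-flannel-600852 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio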

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-x7hx7" [8ab1a416-3cea-4d56-8a53-4645de22a61d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003413386s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-x7hx7" [8ab1a416-3cea-4d56-8a53-4645de22a61d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003125613s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-312944 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-vzkfg" [87c7cd14-d729-423a-a43f-bdb77eaeba04] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004880249s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-312944 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-600852 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (8.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-600852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xkgbx" [ef1dac08-009d-467a-82af-f010cc8ed8ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xkgbx" [ef1dac08-009d-467a-82af-f010cc8ed8ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004959255s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-600852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (72.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m12.497856495s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-600852 "pgrep -a kubelet"
I1207 23:38:22.279702  393125 config.go:182] Loaded profile config "custom-flannel-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-600852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d7gs5" [401ace4c-4659-4a5e-b5c1-33a41179fd07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d7gs5" [401ace4c-4659-4a5e-b5c1-33a41179fd07] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004477795s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-2h4fx" [e6bb28ef-fddb-49cc-815f-ef5d750e4450] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-2h4fx" [e6bb28ef-fddb-49cc-815f-ef5d750e4450] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003807434s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-600852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-600852 "pgrep -a kubelet"
I1207 23:38:33.151417  393125 config.go:182] Loaded profile config "calico-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-600852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8hjqt" [0c9a0f82-9279-43ff-bc8d-50f51ee68bdb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8hjqt" [0c9a0f82-9279-43ff-bc8d-50f51ee68bdb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003476937s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (47.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.994807775s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-600852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (36.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-600852 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (36.532011856s)
--- PASS: TestNetworkPlugins/group/bridge/Start (36.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-tdhm5" [38b4cd29-c035-4809-83ba-cd1ebd43e038] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0039543s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
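
Each ControllerPod check is a label-selector wait: the test polls for pods matching the CNI controller's label in its namespace until they report healthy, with a 10m ceiling. A hand-run equivalent for the flannel profile, using the namespace and label shown in the log above (the kindnet and calico variants differ only in selector and namespace), would be:

# block until the flannel controller pod reports Ready, mirroring the test's wait
kubectl --context flannel-600852 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=10m
kubectl --context flannel-600852 -n kube-flannel get pods -l app=flannel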

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-600852 "pgrep -a kubelet"
I1207 23:39:28.804703  393125 config.go:182] Loaded profile config "enable-default-cni-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-600852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gglsb" [87c4e739-e856-4a40-bc7b-8c2f3db4741b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1207 23:39:30.308627  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/old-k8s-version-320477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-gglsb" [87c4e739-e856-4a40-bc7b-8c2f3db4741b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003042717s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-600852 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-600852 "pgrep -a kubelet"
I1207 23:39:31.420833  393125 config.go:182] Loaded profile config "bridge-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-600852 replace --force -f testdata/netcat-deployment.yaml
I1207 23:39:31.568666  393125 config.go:182] Loaded profile config "flannel-600852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ls64z" [f7e82dba-cd46-4282-a106-7649243b617f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ls64z" [f7e82dba-cd46-4282-a106-7649243b617f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003535285s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-600852 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t4jdv" [6e9b2d80-69ae-49c0-9f48-2e45536358de] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t4jdv" [6e9b2d80-69ae-49c0-9f48-2e45536358de] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004534658s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-600852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-600852 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-600852 exec deployment/netcat -- nslookup kubernetes.default
E1207 23:39:41.775861  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/no-preload-313006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-600852 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    

Test skip (35/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
159 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
160 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
161 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
351 TestPreload 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
371 TestStartStop/group/disable-driver-mounts 0.2
387 TestNetworkPlugins/group/kubenet 3.62
395 TestNetworkPlugins/group/cilium 3.88
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestPreload (0s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:35: skipping TestPreload - user-pulled images not persisted across restarts with crio
--- SKIP: TestPreload (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-837628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-837628
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-600852 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-600852

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-600852

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-600852

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-600852

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-600852

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-600852

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-600852

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-600852

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-600852

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-600852

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-600852

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-600852" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-600852" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:30:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-612608
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:31:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-703538
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:30:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-604160
contexts:
- context:
    cluster: cert-expiration-612608
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:30:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-612608
  name: cert-expiration-612608
- context:
    cluster: kubernetes-upgrade-703538
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:31:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-703538
  name: kubernetes-upgrade-703538
- context:
    cluster: stopped-upgrade-604160
    user: stopped-upgrade-604160
  name: stopped-upgrade-604160
current-context: ""
kind: Config
users:
- name: cert-expiration-612608
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/cert-expiration-612608/client.crt
    client-key: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/cert-expiration-612608/client.key
- name: kubernetes-upgrade-703538
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/kubernetes-upgrade-703538/client.crt
    client-key: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/kubernetes-upgrade-703538/client.key
- name: stopped-upgrade-604160
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/stopped-upgrade-604160/client.crt
    client-key: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/stopped-upgrade-604160/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-600852

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-600852"

                                                
                                                
----------------------- debugLogs end: kubenet-600852 [took: 3.43221765s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-600852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-600852
--- SKIP: TestNetworkPlugins/group/kubenet (3.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-600852 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-600852" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:30:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-612608
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:31:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-703538
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-389542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:30:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-604160
contexts:
- context:
    cluster: cert-expiration-612608
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:30:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-612608
  name: cert-expiration-612608
- context:
    cluster: kubernetes-upgrade-703538
    extensions:
    - extension:
        last-update: Sun, 07 Dec 2025 23:31:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-703538
  name: kubernetes-upgrade-703538
- context:
    cluster: stopped-upgrade-604160
    user: stopped-upgrade-604160
  name: stopped-upgrade-604160
current-context: ""
kind: Config
users:
- name: cert-expiration-612608
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/cert-expiration-612608/client.crt
    client-key: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/cert-expiration-612608/client.key
- name: kubernetes-upgrade-703538
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/kubernetes-upgrade-703538/client.crt
    client-key: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/kubernetes-upgrade-703538/client.key
- name: stopped-upgrade-604160
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/stopped-upgrade-604160/client.crt
    client-key: /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/stopped-upgrade-604160/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-600852

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-600852" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-600852"

                                                
                                                
----------------------- debugLogs end: cilium-600852 [took: 3.703031693s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-600852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-600852
E1207 23:33:18.418138  393125 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-389542/.minikube/profiles/functional-826110/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- SKIP: TestNetworkPlugins/group/cilium (3.88s)

                                                
                                    